Field               Type            Range
query_id            stringlengths   32 - 32
query               stringlengths   5 - 5.38k
positive_passages   listlengths     1 - 23
negative_passages   listlengths     4 - 100
subset              stringclasses   7 values
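The table above describes the fields of each record. As a minimal sketch of how such records could be read and inspected, the Python below assumes the split is stored as a JSON-lines file; the file name `data.jsonl` is a placeholder, while the field names and value ranges come from the schema.

```python
import json

# Placeholder path: substitute the actual file for this split.
DATA_PATH = "data.jsonl"

with open(DATA_PATH, "r", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)

        # Scalar fields from the schema above.
        query_id = record["query_id"]   # 32-character identifier
        query = record["query"]         # free-text query, 5 to ~5.38k characters
        subset = record["subset"]       # one of 7 subset names, e.g. "scidocsrr"

        # List fields: each entry is a dict with "docid", "text", and "title".
        positives = record["positive_passages"]   # 1 to 23 entries
        negatives = record["negative_passages"]   # 4 to 100 entries

        print(f"{query_id} [{subset}]: "
              f"{len(positives)} positive / {len(negatives)} negative passages")
        print(f"  query: {query[:80]}")
        break  # inspect only the first record
```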
query_id: c6b5355d71b9f6a9ce670dea43e2f9d5
query: Software defined environments: An introduction
[ { "docid": "ac8c48688c0dfa60c2b268bfc7aab74a", "text": "management in software defined environments A. Alba G. Alatorre C. Bolik A. Corrao T. Clark S. Gopisetty R. Haas R. I. Kat B. S. Langston N. S. Mandagere D. Noll S. Padbidri R. Routray Y. Song C.-H. Tan A. Traeger The IT industry is experiencing a disruptive trend for which the entire data center infrastructure is becoming software defined and programmable. IT resources are provisioned and optimized continuously according to a declarative and expressive specification of the workload requirements. The software defined environments facilitate agile IT deployment and responsive data center configurations that enable rapid creation and optimization of value-added services for clients. However, this fundamental shift introduces new challenges to existing data center management solutions. In this paper, we focus on the storage aspect of the IT infrastructure and investigate its unique challenges as well as opportunities in the emerging software defined environments. Current state-of-the-art software defined storage (SDS) solutions are discussed, followed by our novel framework to advance the existing SDS solutions. In addition, we study the interactions among SDS, software defined compute (SDC), and software defined networking (SDN) to demonstrate the necessity of a holistic orchestration and to show that joint optimization can significantly improve the effectiveness and efficiency of the overall software defined environments.", "title": "" }, { "docid": "9683bb5dc70128d3981b10503cf3261a", "text": "This article describes the historical context, technical challenges, and main implementation techniques used by VMware Workstation to bring virtualization to the x86 architecture in 1999. Although virtual machine monitors (VMMs) had been around for decades, they were traditionally designed as part of monolithic, single-vendor architectures with explicit support for virtualization. In contrast, the x86 architecture lacked virtualization support, and the industry around it had disaggregated into an ecosystem, with different vendors controlling the computers, CPUs, peripherals, operating systems, and applications, none of them asking for virtualization. We chose to build our solution independently of these vendors.\n As a result, VMware Workstation had to deal with new challenges associated with (i) the lack of virtualization support in the x86 architecture, (ii) the daunting complexity of the architecture itself, (iii) the need to support a broad combination of peripherals, and (iv) the need to offer a simple user experience within existing environments. These new challenges led us to a novel combination of well-known virtualization techniques, techniques from other domains, and new techniques.\n VMware Workstation combined a hosted architecture with a VMM. The hosted architecture enabled a simple user experience and offered broad hardware compatibility. Rather than exposing I/O diversity to the virtual machines, VMware Workstation also relied on software emulation of I/O devices. The VMM combined a trap-and-emulate direct execution engine with a system-level dynamic binary translator to efficiently virtualize the x86 architecture and support most commodity operating systems. By relying on x86 hardware segmentation as a protection mechanism, the binary translator could execute translated code at near hardware speeds. 
The binary translator also relied on partial evaluation and adaptive retranslation to reduce the overall overheads of virtualization.\n Written with the benefit of hindsight, this article shares the key lessons we learned from building the original system and from its later evolution.", "title": "" } ]
[ { "docid": "b01028ef40b1fda74d0621c430ce9141", "text": "ETRI Journal, Volume 29, Number 2, April 2007 A novel low-voltage CMOS current feedback operational amplifier (CFOA) is presented. This realization nearly allows rail-to-rail input/output operations. Also, it provides high driving current capabilities. The CFOA operates at supply voltages of ±0.75 V with a total standby current of 304 μA. The circuit exhibits a bandwidth better than 120 MHz and a current drive capability of ±1 mA. An application of the CFOA to realize a new all-pass filter is given. PSpice simulation results using 0.25 μm CMOS technology parameters for the proposed CFOA and its application are given.", "title": "" }, { "docid": "907940110f89714bf20a8395cd8932d5", "text": "Polyphonic sound event detection (polyphonic SED) is an interesting but challenging task due to the concurrence of multiple sound events. Recently, SED methods based on convolutional neural networks (CNN) and recurrent neural networks (RNN) have shown promising performance. Generally, CNN are designed for local feature extraction while RNN are used to model the temporal dependency among these local features. Despite their success, it is still insufficient for existing deep learning techniques to separate individual sound event from their mixture, largely due to the overlapping characteristic of features. Motivated by the success of Capsule Networks (CapsNet), we propose a more suitable capsule based approach for polyphonic SED. Specifically, several capsule layers are designed to effectively select representative frequency bands for each individual sound event. The temporal dependency of capsule's outputs is then modeled by a RNN. And a dynamic threshold method is proposed for making the final decision based on RNN outputs. Experiments on the TUT-SED Synthetic 2016 dataset show that the proposed approach obtains an F1-score of 68.8% and an error rate of 0.45, outperforming the previous state-of-the-art method of 66.4% and 0.48, respectively.", "title": "" }, { "docid": "e5241f16c4bebf7c87d8dcc99ff38bc4", "text": "Several techniques for estimating the reliability of estimated error rates and for estimating the signicance of observed dierences in error rates are explored in this paper. Textbook formulas which assume a large test set, i.e., a normal distribution, are commonly used to approximate the condence limits of error rates or as an approximate signicance test for comparing error rates. Expressions for determining more exact limits and signicance levels for small samples are given here, and criteria are also given for determining when these more exact methods should be used. The assumed normal distribution gives a poor approximation to the condence interval in most cases, but is usually useful for signicance tests when the proper mean and variance expressions are used. A commonly used 62 signicance test uses an improper expression for , which is too low and leads to a high likelihood of Type I errors. Common machine learning methods for estimating signicance from observations on a single sample may be unreliable.", "title": "" }, { "docid": "6131fdbfe28aaa303b1ee4c29a65f766", "text": "Destination prediction is an essential task for many emerging location based applications such as recommending sightseeing places and targeted advertising based on destination. A common approach to destination prediction is to derive the probability of a location being the destination based on historical trajectories. 
However, existing techniques using this approach suffer from the “data sparsity problem”, i.e., the available historical trajectories is far from being able to cover all possible trajectories. This problem considerably limits the number of query trajectories that can obtain predicted destinations. We propose a novel method named Sub-Trajectory Synthesis (SubSyn) algorithm to address the data sparsity problem. SubSyn algorithm first decomposes historical trajectories into sub-trajectories comprising two neighbouring locations, and then connects the sub-trajectories into “synthesised” trajectories. The number of query trajectories that can have predicted destinations is exponentially increased by this means. Experiments based on real datasets show that SubSyn algorithm can predict destinations for up to ten times more query trajectories than a baseline algorithm while the SubSyn prediction algorithm runs over two orders of magnitude faster than the baseline algorithm. In this paper, we also consider the privacy protection issue in case an adversary uses SubSyn algorithm to derive sensitive location information of users. We propose an efficient algorithm to select a minimum number of locations a user has to hide on her trajectory in order to avoid privacy leak. Experiments also validate the high efficiency of the privacy protection algorithm.", "title": "" }, { "docid": "4b43203c83b46f0637d048c7016cce17", "text": "Efficient detection of three dimensional (3D) objects in point clouds is a challenging problem. Performing 3D descriptor matching or 3D scanning-window search with detector are both time-consuming due to the 3-dimensional complexity. One solution is to project 3D point cloud into 2D images and thus transform the 3D detection problem into 2D space, but projection at multiple viewpoints and rotations produce a large amount of 2D detection tasks, which limit the performance and complexity of the 2D detection algorithm choice. We propose to use convolutional neural network (CNN) for the 2D detection task, because it can handle all viewpoints and rotations for the same class of object together, as well as predicting multiple classes of objects with the same network, without the need for individual detector for each object class. We further improve the detection efficiency by concatenating two extra levels of early rejection networks with binary outputs before the multi-class detection network. Experiments show that our method has competitive overall performance with at least one-order of magnitude speedup comparing with latest 3D point cloud detection methods.", "title": "" }, { "docid": "3fe9dfb8334111ea56d40010ff7a70fa", "text": "1 Summary. The paper presents the LINK application, which is a decision-support system dedicated for operational and investigational activities of homeland security services. The paper briefly discusses issues of criminal analysis, possibilities of utilizing spatial (geographical) information together with crime mapping and spatial analyses. LINK – ŚRODOWISKO ANALIZ KRYMINALNYCH WYKORZYSTUJĄCE NARZĘRZIA ANALIZ GEOPRZESTRZENNYCH Streszczenie. Artykuł prezentuje system LINK będący zintegrowanym środowi-skiem wspomagania analizy kryminalnej przeznaczonym do działań operacyjnych i śledczych służb bezpieczeństwa wewnętrznego. 
W artykule omówiono problemy analizy kryminalnej, możliwość wykorzystania informacji o charakterze przestrzen-nym oraz narzędzia i metody analiz geoprzestrzennych.", "title": "" }, { "docid": "f25bf9cdbe3330dcb450a66ae25d19bd", "text": "The hypoplastic, weak lateral crus of the nose may cause concave alar rim deformity, and in severe cases, even alar rim collapse. These deformities may lead to both aesthetic disfigurement and functional impairment of the nose. The cephalic part of the lateral crus was folded and fixed to reinforce the lateral crus. The study included 17 women and 15 men with a median age of 24 years. The average follow-up period was 12 months. For 23 patients, the described technique was used to treat concave alar rim deformity, whereas for 5 patients, who had thick and sebaceous skin, it was used to prevent weakness of the alar rim. The remaining 4 patients underwent surgery for correction of a collapsed alar valve. Satisfactory results were achieved without any complications. Turn-in folding of the cephalic portion of lateral crus not only functionally supports the lateral crus, but also provides aesthetic improvement of the nasal tip as successfully as cephalic excision of the lateral crura.", "title": "" }, { "docid": "d03a86459dd461dcfac842ae55ae4ebb", "text": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.", "title": "" }, { "docid": "2052b47be2b5e4d0c54ab0be6ae1958b", "text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. 
On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .", "title": "" }, { "docid": "2a7002f1c3bf4460ca535966698c12b9", "text": "In recent years considerable research efforts have been devoted to compression techniques of convolutional neural networks (CNNs). Many works so far have focused on CNN connection pruning methods which produce sparse parameter tensors in convolutional or fully-connected layers. It has been demonstrated in several studies that even simple methods can effectively eliminate connections of a CNN. However, since these methods make parameter tensors just sparser but no smaller, the compression may not transfer directly to acceleration without support from specially designed hardware. In this paper, we propose an iterative approach named Auto-balanced Filter Pruning, where we pre-train the network in an innovative auto-balanced way to transfer the representational capacity of its convolutional layers to a fraction of the filters, prune the redundant ones, then re-train it to restore the accuracy. In this way, a smaller version of the original network is learned and the floating-point operations (FLOPs) are reduced. By applying this method on several common CNNs, we show that a large portion of the filters can be discarded without obvious accuracy drop, leading to significant reduction of computational burdens. Concretely, we reduce the inference cost of LeNet-5 on MNIST, VGG-16 and ResNet-56 on CIFAR-10 by 95.1%, 79.7% and 60.9%, respectively.", "title": "" }, { "docid": "dbf3a58ffe71e6ef61d6c69e85a3c743", "text": "A conventional automatic speech recognizer does not perform well in the presence of noise, while human listeners are able to segregate and recognize speech in noisy conditions. We study a novel feature based on an auditory periphery model for robust speech recognition. Specifically, gammatone frequency cepstral coefficients are derived by applying a cepstral analysis on gammatone filterbank responses. Our evaluations show that the proposed feature performs considerably better than conventional acoustic features. We further demonstrate that integrating the proposed feature with a computational auditory scene analysis system yields promising recognition performance.", "title": "" }, { "docid": "f92a7d9451f9d1213e9b1e479a4df006", "text": "Cet article passe en revue les vingt dernieÁ res anne es de recherche sur la culture et la ne gociation et pre sente les progreÁ s qui ont e te faits, les pieÁ ges dont il faut se de fier et les perspectives pour de futurs travaux. On a remarque que beaucoup de recherches avaient tendance aÁ suivre ces deux modeÁ les implicites: (1) l'influence de la culture sur les strate gies et l'aboutissement de la ne gociation et/ou (2) l'interaction de la culture et d'autres aspects de la situation imme diate sur les re sultats de la ne gociation. Cette recherche a porte sur un grand nombre de cultures et a mis en e vidence plus d'un modeÁ le inte ressant. Nous signalons cependant trois pieÁ ge caracte ristiques de cette litte rature, pieÁ ges qui nous ont handicape s. Tout d'abord, la plupart des travaux se satisfont de de nominations ge ographiques pour de signer les cultures et il est par suite souvent impossible de de terminer les dimensions culturelles qui rendent compte des diffe rences observe es. 
Ensuite, beaucoup de recherches ignorent les processus psychologiques (c'est-aÁ -dire les motivations et les cognitions) qui sont en jeu dans les ne gociations prenant place dans des cultures diffe rentes si bien que nous apprenons peu de choses aÁ propos de la psychologie de la ne gociation dans des contextes culturels diversifie s. On se heurte ainsi aÁ une « boõà te noire » que les travaux sur la culture et la ne gociation se gardent ge ne ralement d'ouvrir. Enfin, notre travail n'a recense qu'un nombre restreint de variables situationnelles imme diates intervenant dans des ne gociations prenant place dans des cultures diffe rentes; notre compre hension des effets mode rateurs de la culture sur la ne gociation est donc limite e. Nous proposons un troisieÁ me modeÁ le, plus complet, de la culture et de la ne gociation, pre sentons quelques donne es re centes en sa faveur et esquissons quelques perspectives pour l'avenir.", "title": "" }, { "docid": "cf0a4f12c23b42c08b6404fe897ed646", "text": "By performing computation at the location of data, non-Von Neumann (VN) computing should provide power and speed benefits over conventional (e.g., VN-based) approaches to data-centric workloads such as deep learning. For the on-chip training of largescale deep neural networks using nonvolatile memory (NVM) based synapses, success will require performance levels (e.g., deep neural network classification accuracies) that are competitive with conventional approaches despite the inherent imperfections of such NVM devices, and will also require massively parallel yet low-power read and write access. In this paper, we focus on the latter requirement, and outline the engineering tradeoffs in performing parallel reads and writes to large arrays of NVM devices to implement this acceleration through what is, at least locally, analog computing. We address how the circuit requirements for this new neuromorphic computing approach are somewhat reminiscent of, yet significantly different from, the well-known requirements found in conventional memory applications. We discuss tradeoffs that can influence both the effective acceleration factor (“speed”) and power requirements of such on-chip learning accelerators. P. Narayanan A. Fumarola L. L. Sanches K. Hosokawa S. C. Lewis R. M. Shelby G. W. Burr", "title": "" }, { "docid": "63eaccbbf34bc68cefa119056d488402", "text": "Interactive Image Generation User edits Generated images User edits Generated images User edits Generated images [1] Zhu et al. Learning a Discriminative Model for the Perception of Realism in Composite Images. ICCV 2015. [2] Goodfellow et al. Generative Adversarial Nets. NIPS 2014 [3] Radford et al. Unsupervised representation learning with deep convolutional generative adversarial networks. ICLR 2016 Reference : Natural images 0, I , Unif 1, 1", "title": "" }, { "docid": "e5ce1ddd50a728fab41043324938a554", "text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). 
Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.", "title": "" }, { "docid": "bb240f2e536e5e5cd80fcca8c9d98171", "text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.", "title": "" }, { "docid": "b75dd43655a70eaf0aaef43826de4337", "text": "Plagiarism detection has been considered as a classification problem which can be approximated with intrinsic strategies, considering self-based information from a given document, and external strategies, considering comparison techniques between a suspicious document and different sources. In this work, both intrinsic and external approaches for plagiarism detection are presented. First, the main contribution for intrinsic plagiarism detection is associated to the outlier detection approach for detecting changes in the author’s style. Then, the main contribution for the proposed external plagiarism detection is the space reduction technique to reduce the complexity of this plagiarism detection task. Results shows that our approach is highly competitive with respect to the leading research teams in plagiarism detection.", "title": "" }, { "docid": "cc8b0cd938bc6315864925a7a057e211", "text": "Despite the continuous growth in the number of smartphones around the globe, Short Message Service (SMS) still remains as one of the most popular, cheap and accessible ways of exchanging text messages using mobile phones. Nevertheless, the lack of security in SMS prevents its wide usage in sensitive contexts such as banking and health-related applications. Aiming to tackle this issue, this paper presents SMSCrypto, a framework for securing SMS-based communications in mobile phones. SMSCrypto encloses a tailored selection of lightweight cryptographic algorithms and protocols, providing encryption, authentication and signature services. The proposed framework is implemented both in Java (target at JVM-enabled platforms) and in C (for constrained SIM Card processors) languages, thus being suitable", "title": "" }, { "docid": "0ca588e42d16733bc8eef4e7957e01ab", "text": "Three-dimensional (3D) finite element (FE) models are commonly used to analyze the mechanical behavior of the bone under different conditions (i.e., before and after arthroplasty). They can provide detailed information but they are numerically expensive and this limits their use in cases where large or numerous simulations are required. On the other hand, 2D models show less computational cost, but the precision of results depends on the approach used for the simplification. Two main questions arise: Are the 3D results adequately represented by a 2D section of the model? Which approach should be used to build a 2D model that provides reliable results compared to the 3D model? 
In this paper, we first evaluate if the stem symmetry plane used for generating the 2D models of bone-implant systems adequately represents the results of the full 3D model for stair climbing activity. Then, we explore three different approaches that have been used in the past for creating 2D models: (1) without side-plate (WOSP), (2) with variable thickness side-plate and constant cortical thickness (SPCT), and (3) with variable thickness side-plate and variable cortical thickness (SPVT). From the different approaches investigated, a 2D model including a side-plate best represents the results obtained with the full 3D model with much less computational cost. The side-plate needs to have variable thickness, while the cortical bone thickness can be kept constant.", "title": "" }, { "docid": "f94385118e9fca123bae28093b288723", "text": "One of the major restrictions on the performance of videobased person re-id is partial noise caused by occlusion, blur and illumination. Since different spatial regions of a single frame have various quality, and the quality of the same region also varies across frames in a tracklet, a good way to address the problem is to effectively aggregate complementary information from all frames in a sequence, using better regions from other frames to compensate the influence of an image region with poor quality. To achieve this, we propose a novel Region-based Quality Estimation Network (RQEN), in which an ingenious training mechanism enables the effective learning to extract the complementary region-based information between different frames. Compared with other feature extraction methods, we achieved comparable results of 92.4%, 76.1% and 77.83% on the PRID 2011, iLIDS-VID and MARS, respectively. In addition, to alleviate the lack of clean large-scale person re-id datasets for the community, this paper also contributes a new high-quality dataset, named “Labeled Pedestrian in the Wild (LPW)” which contains 7,694 tracklets with over 590,000 images. Despite its relatively large scale, the annotations also possess high cleanliness. Moreover, it’s more challenging in the following aspects: the age of characters varies from childhood to elderhood; the postures of people are diverse, including running and cycling in addition to the normal walking state.", "title": "" } ]
subset: scidocsrr
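Each record, like the one just shown, pairs a query with judged positive passages and sampled negatives, the shape typically used for training or evaluating retrieval and reranking models. Purely as an illustration, the sketch below flattens one parsed record (as produced by the reader above) into labeled query-passage pairs; the helper name and the title-prepending convention are assumptions, not part of the dataset.

```python
def record_to_pairs(record):
    """Flatten one record into (query, passage_text, label) tuples.

    Positive passages get label 1, negatives get label 0. Prepending the
    title to the passage text is a common convention, not a requirement.
    """
    query = record["query"]
    pairs = []
    for label, field in ((1, "positive_passages"), (0, "negative_passages")):
        for passage in record[field]:
            text = passage["text"]
            if passage.get("title"):
                text = passage["title"] + " " + text
            pairs.append((query, text, label))
    return pairs

# Usage, assuming `record` holds a parsed record:
# pairs = record_to_pairs(record)
# print(len(pairs), "labeled query-passage pairs")
```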
query_id: b6353659632254427774f5450abf6624
query: A competition on generalized software-based face presentation attack detection in mobile scenarios
[ { "docid": "db5865f8f8701e949a9bb2f41eb97244", "text": "This paper proposes a method for constructing local image descriptors which efficiently encode texture information and are suitable for histogram based representation of image regions. The method computes a binary code for each pixel by linearly projecting local image patches onto a subspace, whose basis vectors are learnt from natural images via independent component analysis, and by binarizing the coordinates in this basis via thresholding. The length of the binary code string is determined by the number of basis vectors. Image regions can be conveniently represented by histograms of pixels' binary codes. Our method is inspired by other descriptors which produce binary codes, such as local binary pattern and local phase quantization. However, instead of heuristic code constructions, the proposed approach is based on statistics of natural images and this improves its modeling capacity. The experimental results show that our method improves accuracy in texture recognition tasks compared to the state-of-the-art.", "title": "" }, { "docid": "2c9138a706f316a10104f2da9a054e44", "text": "Research on face spoofing detection has mainly been focused on analyzing the luminance of the face images, hence discarding the chrominance information which can be useful for discriminating fake faces from genuine ones. In this work, we propose a new face anti-spoofing method based on color texture analysis. We analyze the joint color-texture information from the luminance and the chrominance channels using a color local binary pattern descriptor. More specifically, the feature histograms are extracted from each image band separately. Extensive experiments on two benchmark datasets, namely CASIA face anti-spoofing and Replay-Attack databases, showed excellent results compared to the state-of-the-art. Most importantly, our inter-database evaluation depicts that the proposed approach showed very promising generalization capabilities.", "title": "" }, { "docid": "2967df08ad0b9987ce2d6cb6006d3e69", "text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.", "title": "" } ]
[ { "docid": "b7a4eec912eb32b3b50f1b19822c44a1", "text": "Mining numerical data is a relatively difficult problem in data mining. Clustering is one of the techniques. We consider a database with numerical attributes, in which each transaction is viewed as a multi-dimensional vector. By studying the clusters formed by these vectors, we can discover certain behaviors hidden in the data. Traditional clustering algorithms find clusters in the full space of the data sets. This results in high dimensional clusters, which are poorly comprehensible to human. One important task in this setting is the ability to discover clusters embedded in the subspaces of a high-dimensional data set. This problem is known as subspace clustering. We follow the basic assumptions of previous work CLIQUE. It is found that the number of subspaces with clustering is very large, and a criterion called the coverage is proposed in CLIQUE for the pruning. In addition to coverage, we identify new useful criteria for this problem and propose an entropybased algorithm called ENCLUS to handle the criteria. Our major contributions are: (1) identify new meaningful criteria of high density and correlation of dimensions for goodness of clustering in subspaces, (2) introduce the use of entropy and provide evidence to support its use, (3) make use of two closure properties based on entropy to prune away uninteresting subspaces efficiently, (4) propose a mechanism to mine non-minimally correlated subspaces which are of interest because of strong clustering, (5) experiments are carried out to show the effectiveness of the proposed method.", "title": "" }, { "docid": "b02cfc336a6e1636dbcba46d4ee762e8", "text": "Peter C. Verhoef a,∗, Katherine N. Lemon b, A. Parasuraman c, Anne Roggeveen d, Michael Tsiros c, Leonard A. Schlesinger d a University of Groningen, Faculty of Economics and Business, P.O. Box 800, NL-9700 AV Groningen, The Netherlands b Boston College, Carroll School of Management, Fulton Hall 510, 140 Commonwealth Avenue, Chestnut Hill, MA 02467 United States c University of Miami, School of Business Administration, P.O. Box 24814, Coral Gables, FL 33124, United States d Babson College, 231 Forest Street, Wellesley, Massachusetts, United States", "title": "" }, { "docid": "34ceb0e84b4e000b721f87bcbec21094", "text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. 
Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.", "title": "" }, { "docid": "3bbc633650b9010ef5c76ea1d634a495", "text": "It is well known that significant metabolic change take place as cells are transformed from normal to malignant. This review focuses on the use of different bioinformatics tools in cancer metabolomics studies. The article begins by describing different metabolomics technologies and data generation techniques. Overview of the data pre-processing techniques is provided and multivariate data analysis techniques are discussed and illustrated with case studies, including principal component analysis, clustering techniques, self-organizing maps, partial least squares, and discriminant function analysis. Also included is a discussion of available software packages.", "title": "" }, { "docid": "2c0b3b58da77cc217e4311142c0aa196", "text": "In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures.", "title": "" }, { "docid": "0190bdc5eafae72620f7fabbcdcc223c", "text": "Breast cancer is regarded as one of the most frequent mortality causes among women. As early detection of breast cancer increases the survival chance, creation of a system to diagnose suspicious masses in mammograms is important. In this paper, two automated methods are presented to diagnose mass types of benign and malignant in mammograms. In the first proposed method, segmentation is done using an automated region growing whose threshold is obtained by a trained artificial neural network (ANN). In the second proposed method, segmentation is performed by a cellular neural network (CNN) whose parameters are determined by a genetic algorithm (GA). Intensity, textural, and shape features are extracted from segmented tumors. GA is used to select appropriate features from the set of extracted features. In the next stage, ANNs are used to classify the mammograms as benign or malignant. To evaluate the performance of the proposed methods different classifiers (such as random forest, naïve Bayes, SVM, and KNN) are used. Results of the proposed techniques performed on MIAS and DDSM databases are promising. The obtained sensitivity, specificity, and accuracy rates are 96.87%, 95.94%, and 96.47%, respectively. 2014 Published by Elsevier Ltd.", "title": "" }, { "docid": "58925e0088e240f42836f0c5d29f88d3", "text": "SUMMARY\nDnaSP is a software package for the analysis of DNA polymorphism data. 
Present version introduces several new modules and features which, among other options allow: (1) handling big data sets (approximately 5 Mb per sequence); (2) conducting a large number of coalescent-based tests by Monte Carlo computer simulations; (3) extensive analyses of the genetic differentiation and gene flow among populations; (4) analysing the evolutionary pattern of preferred and unpreferred codons; (5) generating graphical outputs for an easy visualization of results.\n\n\nAVAILABILITY\nThe software package, including complete documentation and examples, is freely available to academic users from: http://www.ub.es/dnasp", "title": "" }, { "docid": "0e2989631390dc57d0bce81fb7b633c9", "text": "Among the most powerful tools for knowledge representation, we cite the ontology which allows knowledge structuring and sharing. In order to achieve efficient domain knowledge bases content, the latter has to establish well linked and knowledge between its components. In parallel, data mining techniques are used to discover hidden structures within large databases. In particular, association rules are used to discover co-occurrence relationships from past experiences. In this context, we propose, to develop a method to enrich existing ontologies with the identification of novel semantic relations between concepts in order to have a better coverage of the domain knowledge. The enrichment process is realized through discovered association rules. Nevertheless, this technique generates a large number of rules, where some of them, may be evident or already declared in the knowledge base. To this end, the generated association rules are categorized into three main classes: known knowledge, novel knowledge and unexpected rules. We demonstrate the applicability of this method using an existing mammographic ontology and patient’s records.", "title": "" }, { "docid": "a13a50d552572d08b4d1496ca87ac160", "text": "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority oversampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.", "title": "" }, { "docid": "144c11393bef345c67595661b5b20772", "text": "BACKGROUND\nAppropriate placement of the bispectral index (BIS)-vista montage for frontal approach neurosurgical procedures is a neuromonitoring challenge. The standard bifrontal application interferes with the operative field; yet to date, no other placements have demonstrated good agreement. The purpose of our study was to compare the standard BIS montage with an alternate BIS montage across the nasal dorsum for neuromonitoring.\n\n\nMATERIALS AND METHODS\nThe authors performed a prospective study, enrolling patients and performing neuromonitoring using both the standard and the alternative montage on each patient. 
Data from the 2 placements were compared and analyzed using a Bland-Altman analysis, a Scatter plot analysis, and a matched-pair analysis.\n\n\nRESULTS\nOverall, 2567 minutes of data from each montage was collected on 28 subjects. Comparing the overall difference in score, the alternate BIS montage score was, on average, 2.0 (6.2) greater than the standard BIS montage score (P<0.0001). The Bland-Altman analysis revealed a difference in score of -2.0 (95% confidence interval, -14.1, 10.1), with 108/2567 (4.2%) of the values lying outside of the limit of agreement. The scatter plot analysis overall produced a trend line with the equation y=0.94x+0.82, with an R coefficient of 0.82.\n\n\nCONCLUSIONS\nWe determined that the nasal montage produces values that have slightly more variability compared with that ideally desired, but the variability is not clinically significant. In cases where the standard BIS-vista montage would interfere with the operative field, an alternative positioning of the BIS montage across the nasal bridge and under the eye can be used.", "title": "" }, { "docid": "7a9419f17bcdfd2f6e361bd97d487d9f", "text": "2. Relations 4. Dataset and Evaluation Cause-Effect Smoking causes cancer. Instrument-Agency The murderer used an axe. Product-Producer Bees make honey. Content-Container The cat is in the hat. Entity-Origin Vinegar is made from wine. Entity-Destination The car arrived at the station. Component-Whole The laptop has a fast processor. Member-Collection There are ten cows in the herd. Communication-Topic You interrupted a lecture on maths.  Each example consists of two (base) NPs marked with tags <e1> and <e2>:", "title": "" }, { "docid": "80c522a65fafb98886d1d3d848605e77", "text": "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. 
Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: //github.com/ramprs/grad-cam/ along with a demo on CloudCV [2] and video at youtu.be/COjUB9Izk6E.", "title": "" }, { "docid": "e29c6d0c4d5b82d7e968ab48d076a7ba", "text": "In recent years, a large number of researchers are endeavoring to develop wireless sensing and related applications as Wi-Fi devices become ubiquitous. As a significant research branch, gesture recognition has become one of the research hotspots. In this paper, we propose WiCatch, a novel device free gesture recognition system which utilizes the channel state information to recognize the motion of hands. First of all, with the aim of catching the weak signals reflected from hands, a novel data fusion-based interference elimination algorithm is proposed to diminish the interference caused by signals reflected from stationary objects and the direct signal from transmitter to receiver. Second, the system catches the signals reflected from moving hands and rebuilds the motion locus of the gesture by constructing the virtual antenna array based on signal samples in time domain. Finally, we adopt support vector machines to complete the classification. The extensive experimental results demonstrate that the WiCatch can achieves a recognition accuracy over 0.96. Furthermore, the WiCatch can be applied to two-hand gesture recognition and reach a recognition accuracy of 0.95.", "title": "" }, { "docid": "1ff73fcdeba269bc2bf9f45279cb3e45", "text": "The Internet of Things has attracted a plenty of research in this decade and imposed fascinating services where large numbers of heterogeneous-features entities socially collaborate together to solve complex scenarios. However, these entities need to trust each other prior to exchanging data or offering services. In this paper, we briefly present our ongoing project called Trust Service Platform, which offers trust assessment of any two entities in the Social Internet of Things to applications and services. We propose a trust model that incorporates both reputation properties as Recommendation and Reputation trust metrics; and knowledge-based property as Knowledge trust metric. For the trust service platform deployment, we propose a reputation system and a functional architecture with Trust Agent, Trust Broker and Trust Analysis and Management modules along with mechanisms and algorithms to deal with the three trust metrics. We also present a utility theory-based mechanism for trust calculation. To clarify our trust service platform, we describe the trust models and mechanisms in accordance with a trust car-sharing service. We believe this study offers the better understanding of the trust as a service in the platform and will impose many trustrelated research challenges as the future work. 
Keywords—Social Internet of Things; Trust as a Service; TaaS; Trust Model; Trust Metric; Trust Management; Recommendation; Reputation; Knowledge; Fuzzy; Utility Theory;", "title": "" }, { "docid": "fcf0ac3b52a1db116463e7376dae4950", "text": "Although the ability to perform complex cognitive operations is assumed to be impaired following acute marijuana smoking, complex cognitive performance after acute marijuana use has not been adequately assessed under experimental conditions. In the present study, we used a within-participant double-blind design to evaluate the effects acute marijuana smoking on complex cognitive performance in experienced marijuana smokers. Eighteen healthy research volunteers (8 females, 10 males), averaging 24 marijuana cigarettes per week, completed this three-session outpatient study; sessions were separated by at least 72-hrs. During sessions, participants completed baseline computerized cognitive tasks, smoked a single marijuana cigarette (0%, 1.8%, or 3.9% Δ9-THC w/w), and completed additional cognitive tasks. Blood pressure, heart rate, and subjective effects were also assessed throughout sessions. Marijuana cigarettes were administered in a double-blind fashion and the sequence of Δ9-THC concentration order was balanced across participants. Although marijuana significantly increased the number of premature responses and the time participants required to complete several tasks, it had no effect on accuracy on measures of cognitive flexibility, mental calculation, and reasoning. Additionally, heart rate and several subjective-effect ratings (e.g., “Good Drug Effect,” “High,” “Mellow”) were significantly increased in a Δ9-THC concentration-dependent manner. These data demonstrate that acute marijuana smoking produced minimal effects on complex cognitive task performance in experienced marijuana users.", "title": "" }, { "docid": "7bdbfd11a4aa723d3b5361f689d93698", "text": "We discuss the characteristics of constructive news comments, and present methods to identify them. First, we define the notion of constructiveness. Second, we annotate a corpus for constructiveness. Third, we explore whether available argumentation corpora can be useful to identify constructiveness in news comments. Our model trained on argumentation corpora achieves a top accuracy of 72.59% (baseline=49.44%) on our crowdannotated test data. Finally, we examine the relation between constructiveness and toxicity. In our crowd-annotated data, 21.42% of the non-constructive comments and 17.89% of the constructive comments are toxic, suggesting that non-constructive comments are not much more toxic than constructive comments.", "title": "" }, { "docid": "4a817638751fdfe46dfccc43eea76cbd", "text": "In this article we present a classification scheme for quantum computing technologies that is based on the characteristics most relevant to computer systems architecture. The engineering trade-offs of execution speed, decoherence of the quantum states, and size of systems are described. Concurrency, storage capacity, and interconnection network topology influence algorithmic efficiency, while quantum error correction and necessary quantum state measurement are the ultimate drivers of logical clock speed. We discuss several proposed technologies. 
Finally, we use our taxonomy to explore architectural implications for common arithmetic circuits, examine the implementation of quantum error correction, and discuss cluster-state quantum computation.", "title": "" }, { "docid": "f141bd66dc2a842c21f905e3e01fa93c", "text": "In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature", "title": "" }, { "docid": "4fbc692a4291a92c6fa77dc78913e587", "text": "Achieving artificial visual reasoning — the ability to answer image-related questions which require a multi-step, high-level process — is an important step towards artificial general intelligence. This multi-modal task requires learning a questiondependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-ofthe-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.", "title": "" } ]
subset: scidocsrr
query_id: 69ad47bebd6b43816e3e7acba16c3c1b
query: Smartphone addiction and its relationship with social anxiety and loneliness
[ { "docid": "21bb289fb932b23d95fee7d40401d70c", "text": "Mobile phone use is banned or regulated in some circumstances. Despite recognized safety concerns and legal regulations, some people do not refrain from using mobile phones. Such problematic mobile phone use can be considered to be an addiction-like behavior. To find the potential predictors, we examined the correlation between problematic mobile phone use and personality traits reported in addiction literature, which indicated that problematic mobile phone use was a function of gender, self-monitoring, and approval motivation but not of loneliness. These findings suggest that the measurements of these addictive personality traits would be helpful in the screening and intervention of potential problematic users of mobile phones.", "title": "" }, { "docid": "d72db190e011d0e8260465ce259111df", "text": "This study developed a Smartphone Addiction Proneness Scale (SAPS) based on the existing internet and cellular phone addiction scales. For the development of this scale, 29 items (1.5 times the final number of items) were initially selected as preliminary items, based on the previous studies on internet/phone addiction as well as the clinical experience of involved experts. The preliminary scale was administered to a nationally representative sample of 795 students in elementary, middle, and high schools across South Korea. Then, final 15 items were selected according to the reliability test results. The final scale consisted of four subdomains: (1) disturbance of adaptive functions, (2) virtual life orientation, (3) withdrawal, and (4) tolerance. The final scale indicated a high reliability with Cronbach's α of .880. Support for the scale's criterion validity has been demonstrated by its relationship to the internet addiction scale, KS-II (r  =  .49). For the analysis of construct validity, we tested the Structural Equation Model. The results showed the four-factor structure to be valid (NFI  =  .943, TLI  =  .902, CFI  =  .902, RMSEA  =  .034). Smartphone addiction is gaining a greater spotlight as possibly a new form of addiction along with internet addiction. The SAPS appears to be a reliable and valid diagnostic scale for screening adolescents who may be at risk of smartphone addiction. Further implications and limitations are discussed.", "title": "" }, { "docid": "1ebb827b9baf3307bc20de78538d23e7", "text": "0747-5632/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2013.07.003 ⇑ Corresponding author. Address: University of North Texas, College of Business, 1155 Union Circle #311160, Denton, TX 76203-5017, USA. E-mail addresses: mohammad.salehan@unt.edu (M. Salehan), arash.negah ban@unt.edu (A. Negahban). 1 These authors contributed equally to the work. Mohammad Salehan 1,⇑, Arash Negahban 1", "title": "" } ]
[ { "docid": "db79c4fc00f18c3d7822c9f79d1a4a83", "text": "We propose a new pipeline for optical flow computation, based on Deep Learning techniques. We suggest using a Siamese CNN to independently, and in parallel, compute the descriptors of both images. The learned descriptors are then compared efficiently using the L2 norm and do not require network processing of patch pairs. The success of the method is based on an innovative loss function that computes higher moments of the loss distributions for each training batch. Combined with an Approximate Nearest Neighbor patch matching method and a flow interpolation technique, state of the art performance is obtained on the most challenging and competitive optical flow benchmarks.", "title": "" }, { "docid": "e8fcd0e7e27a4f17f963bbdbd94e6406", "text": "Visual Interpretation of gestures can be useful in accomplishing natural Human Computer Interactions (HCI). In this paper we proposed a method for recognizing hand gestures. We have designed a system which can identify specific hand gestures and use them to convey information. At any time, a user can exhibit his/her hand doing a specific gesture in front of a web camera linked to a computer. Firstly, we captured the hand gesture of a user and stored it on disk. Then we read those videos captured one by one, converted them to binary images and created 3D Euclidian Space of binary values. We have used supervised feed-forward neural net based training and back propagation algorithm for classifying hand gestures into ten categories: hand pointing up, pointing down, pointing left, pointing right and pointing front and number of fingers user was showing. We could achieve up to 89% correct results on a typical test set.", "title": "" }, { "docid": "98926294ff7f9e13f8187e8f261639e9", "text": "The resistive cross-point array architecture has been proposed for on-chip implementation of weighted sum and weight update operations in neuro-inspired learning algorithms. However, several limiting factors potentially hamper the learning accuracy, including the nonlinearity and device variations in weight update, and the read noise, limited ON/OFF weight ratio and array parasitics in weighted sum. With unsupervised sparse coding as a case study algorithm, this paper employs device-algorithm co-design methodologies to quantify and mitigate the impact of these non-ideal properties on the accuracy. Our analysis shows that the realistic properties in weight update are tolerable, while those in weighted sum are detrimental to the accuracy. With calibration of realistic synaptic behaviors from experimental data, our study shows that the recognition accuracy of MNIST handwriting digits degrades from ∼96 to ∼30 percent. The strategies to mitigate this accuracy loss include 1) redundant cells to alleviate the impact of device variations; 2) a dummy column to eliminate the off-state current; and 3) selector and larger wire width to reduce IR drop along interconnects. The selector also reduces the leakage power in weight update. 
With improved properties by these strategies, the accuracy increases back to ∼95 percent, enabling reliable integration of realistic synaptic devices in neuromorphic systems.", "title": "" }, { "docid": "1f02f9dae964a7e326724faa79f5ddc3", "text": "The purpose of this review was to examine published research on small-group development done in the last ten years that would constitute an empirical test of Tuckman’s (1965) hypothesis that groups go through these stages of “forming,” “storming,” “norming,” and “performing.” Of the twenty-two studies reviewed, only one set out to directly test this hypothesis, although many of the others could be related to it. Following a review of these studies, a fifth stage, “adjourning.” was added to the hypothesis, and more empirical work was recommended.", "title": "" }, { "docid": "bcd8757af7d00d198a1799a3bc145c2c", "text": "Trust is a critical social process that helps us to cooperate with others and is present to some degree in all human interaction. However, the underlying brain mechanisms of conditional and unconditional trust in social reciprocal exchange are still obscure. Here, we used hyperfunctional magnetic resonance imaging, in which two strangers interacted online with one another in a sequential reciprocal trust game while their brains were simultaneously scanned. By designing a nonanonymous, alternating multiround game, trust became bidirectional, and we were able to quantify partnership building and maintenance. Using within- and between-brain analyses, an examination of functional brain activity supports the hypothesis that the preferential activation of different neuronal systems implements these two trust strategies. We show that the paracingulate cortex is critically involved in building a trust relationship by inferring another person's intentions to predict subsequent behavior. This more recently evolved brain region can be differently engaged to interact with more primitive neural systems in maintaining conditional and unconditional trust in a partnership. Conditional trust selectively activated the ventral tegmental area, a region linked to the evaluation of expected and realized reward, whereas unconditional trust selectively activated the septal area, a region linked to social attachment behavior. The interplay of these neural systems supports reciprocal exchange that operates beyond the immediate spheres of kinship, one of the distinguishing features of the human species.", "title": "" }, { "docid": "477e4a6930d147a598e1e0c453062ed2", "text": "Stock markets are driven by a multitude of dynamics in which facts and beliefs play a major role in affecting the price of a company’s stock. In today’s information age, news can spread around the globe in some cases faster than they happen. While it can be beneficial for many applications including disaster prevention, our aim in this thesis is to use the timely release of information to model the stock market. We extract facts and beliefs from the population using one of the fastest growing social networking tools on the Internet, namely Twitter. We examine the use of Natural Language Processing techniques with a predictive machine learning approach to analyze millions of Twitter posts from which we draw distinctive features to create a model that enables the prediction of stock prices. We selected several stocks from the NASDAQ stock exchange and collected Intra-Day stock quotes during a period of two weeks. 
We build different feature representations from the raw Twitter posts and combined them with the stock price in order to build a regression model using the Support Vector Regression algorithm. We were able to build models of the stocks which predicted discrete prices that were close to a strong baseline. We further investigated the prediction of future prices, on average predicting 15 minutes ahead of the actual price, and evaluated the results using a Virtual Stock Trading Engine. These results were in general promising, but contained also some random variations across the different datasets.", "title": "" }, { "docid": "172f105b7b09f19b278742af95a8d9bb", "text": "50 AI MAGAZINE The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern, 2012) was proposed by Hector Levesque in 2011 as an alternative to the Turing test. Turing (1950) had first introduced the notion of testing a computer system’s intelligence by assessing whether it could fool a human judge into thinking that it was conversing with a human rather a computer. Although intuitively appealing and arbitrarily flexible — in theory, a human can ask the computer system that is being tested wide-ranging questions about any subject desired — in practice, the execution of the Turing test turns out to be highly susceptible to systems that few people would wish to call intelligent. The Loebner Prize Competition (Christian 2011) is in particular associated with the development of chatterbots that are best viewed as successors to ELIZA (Weizenbaum 1966), the program that fooled people into thinking that they were talking to a human psychotherapist by cleverly turning a person’s statements into questions of the sort a therapist would ask. The knowledge and inference that characterize conversations of substance — for example, discussing alternate metaphors in sonnets of Shakespeare — and which Turing presented as examples of the sorts of conversation that an intelligent system should be able to produce, are absent in these chatterbots. The focus is merely on engaging in surfacelevel conversation that can fool some humans who do not delve too deeply into a conversation, for at least a few minutes, into thinking that they are speaking to another person. The widely reported triumph of the chatterbot Eugene Goostman in fooling 10 out of 30 judges to judge, after a fiveminute conversation, that it was human (University of Read-", "title": "" }, { "docid": "a0c240efadc361ea36b441d34fc10a26", "text": "We describe a single-feed stacked patch antenna design that is capable of simultaneously receiving both right hand circularly polarized (RHCP) satellite signals within the GPS LI frequency band and left hand circularly polarized (LHCP) satellite signals within the SDARS frequency band. In addition, the design provides improved SDARS vertical linear polarization (VLP) gain for terrestrial repeater signal reception at low elevation angles as compared to a current state of the art SDARS patch antenna.", "title": "" }, { "docid": "0a414cd886ebf2a311d27b17c53e535f", "text": "We consider the problem of classifying documents not by topic, but by overall sentiment. Previous approaches to sentiment classification have favored domain-specific, supervised machine learning (Naive Bayes, maximum entropy classification, and support vector machines). Inherent in these methodologies is the need for annotated training data. 
Building on previous work, we examine an unsupervised system of iteratively extracting positive and negative sentiment items which can be used to classify documents. Our method is completely unsupervised and only requires linguistic insight into the semantic orientation of sentiment.", "title": "" }, { "docid": "305f0c417d1e6f6189c431078b359793", "text": "Sentence relation extraction aims to extract relational facts from sentences, which is an important task in the natural language processing field. Previous models rely on the manually labeled supervised dataset. However, human annotation is costly and limits the number of relations and the data size, which makes it difficult to scale to large domains. In order to conduct large-scale relation extraction, we utilize an existing knowledge base to heuristically align with texts, which does not rely on human annotation and is easy to scale. However, using distant supervised data for relation extraction is facing a new challenge: sentences in the distant supervised dataset are not directly labeled and not all sentences that mention an entity pair can represent the relation between them. To solve this problem, we propose a novel model with reinforcement learning. The relation of the entity pair is used as distant supervision and guides the training of the relation extractor with the help of a reinforcement learning method. We conduct two types of experiments on a publicly released dataset. Experiment results demonstrate the effectiveness of the proposed method compared with baseline models, which achieves 13.36% improvement.", "title": "" }, { "docid": "2b30506690acbae9240ef867e961bc6c", "text": "Background Breast milk can turn pink with Serratia marcescens colonization; this bacterium has been associated with several diseases and even death. It is seen most commonly in the intensive care settings. Discoloration of the breast milk can lead to premature termination of nursing. We describe two cases of pink-colored breast milk in which S. marcescens was isolated from both the expressed breast milk. Antimicrobial treatment was administered to the mothers. Return to breastfeeding was successful in both cases. Conclusions Pink breast milk is caused by S. marcescens colonization. In such cases, early recognition and treatment before the development of infection is recommended to return to breastfeeding.", "title": "" }, { "docid": "f52073ddb9c4507d11190cd13637b91d", "text": "The application of fuzzy-based control strategies has recently gained enormous recognition as an approach for the rapid development of effective controllers for nonlinear time-variant systems. This paper describes the preliminary research and implementation of a fuzzy logic based controller to control the wheel slip for electric vehicle antilock braking systems (ABSs). As the dynamics of the braking systems are highly nonlinear and time variant, fuzzy control offers potential as an important tool for development of robust traction control. Simulation studies are employed to derive an initial rule base that is then tested on an experimental test facility representing the dynamics of a braking system. The test facility is composed of an induction machine load operating in the generating region. It is shown that the torque-slip characteristics of an induction motor provide a convenient platform for simulating a variety of tire/road driving conditions, negating the initial requirement for skid-pan trials when developing algorithms.
The fuzzy membership functions were subsequently refined by analysis of the data acquired from the test facility while simulating operation at a high coefficient of friction. The robustness of the fuzzy-logic slip regulator is further tested by applying the resulting controller over a wide range of operating conditions. The results indicate that ABS/traction control may substantially improve longitudinal performance and offer significant potential for optimal control of driven wheels, especially under icy conditions where classical ABS/traction control schemes are constrained to operate very conservatively.", "title": "" }, { "docid": "b43c4d5d97120963a3ea84a01d029819", "text": "Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For SpanishEnglish translation, in particular, most parallel data available exists only in vastly different domains and registers. In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon’s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (information, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.", "title": "" }, { "docid": "16946ba4be3cf8683bee676b5ac5e0de", "text": "1. The types of perfect Interpretation-wise, several types of perfect expressions have been recognized in the literature (e. To illustrate, a present perfect can have one of at least three interpretations: (1) a. Since 2000, Alexandra has lived in LA. UNIVERSAL b. Alexandra has been in LA (before). EXPERIENTIAL c. Alexandra has (just) arrived in LA. RESULTATIVE The three types of perfect make different claims about the temporal location of the underlying eventuality, i.e., of live in LA in (1a), be in LA in (1b), arrive in LA in (1c), with respect to a reference time. The UNIVERSAL perfect, as in (1a), asserts that the underlying eventuality holds throughout an interval, delimited by the time of utterance and a certain time in the past (in this case, the year 2000). The EXPERIENTIAL perfect, as in (1b), asserts that the underlying eventuality holds at a proper subset of an interval, extending back from the utterance time. The RESULTATIVE perfect makes the same assertion as the Experiential perfect, with the added meaning that the result of the underlying eventuality (be in LA is the result of arrive in LA) holds at the utterance time. The distinction between the Experiential and the Resultative perfects is rather subtle. The two are commonly grouped together as the EXISTENTIAL perfect (McCawley 1971, Mittwoch 1988) and this terminology is adopted here as well. 1 Two related questions arise: (i) Is the distinction between the three types of perfect grammatically based? 
(ii) If indeed so, then is it still possible to posit a common representation for the perfect – a uniform structure with a single meaning – which, in combination with certain other syntactic components , each with a specialized meaning, results in the three different readings? This paper suggests that the answer to both questions is yes. To start addressing these questions, let us look at some of the known factors behind the various interpretations of the perfect. It has to be noted that the different perfect readings are not a peculiarity of the present perfect despite the fact that they are primarily discussed in relation to that form. The same interpretations are available to the past, future and nonfinite per", "title": "" }, { "docid": "8660613f0c17aef86bffe1107257e316", "text": "The enumeration and characterization of circulating tumor cells (CTCs) in the peripheral blood and disseminated tumor cells (DTCs) in bone marrow may provide important prognostic information and might help to monitor efficacy of therapy. Since current assays cannot distinguish between apoptotic and viable DTCs/CTCs, it is now possible to apply a novel ELISPOT assay (designated 'EPISPOT') that detects proteins secreted/released/shed from single epithelial cancer cells. Cells are cultured for a short time on a membrane coated with antibodies that capture the secreted/released/shed proteins which are subsequently detected by secondary antibodies labeled with fluorochromes. In breast cancer, we measured the release of cytokeratin-19 (CK19) and mucin-1 (MUC1) and demonstrated that many patients harbored viable DTCs, even in patients with apparently localized tumors (stage M(0): 54%). Preliminary clinical data showed that patients with DTC-releasing CK19 have an unfavorable outcome. We also studied CTCs or CK19-secreting cells in the peripheral blood of M1 breast cancer patients and showed that patients with CK19-SC had a worse clinical outcome. In prostate cancer, we used prostate-specific antigen (PSA) secretion as marker and found that a significant fraction of CTCs secreted fibroblast growth factor-2 (FGF2), a known stem cell growth factor. In conclusion, the EPISPOT assay offers a new opportunity to detect and characterize viable DTCs/CTCs in cancer patients and it can be extended to a multi-parameter analysis revealing a CTC/DTC protein fingerprint.", "title": "" }, { "docid": "45fe8a9188804b222df5f12bc9a486bc", "text": "There is renewed interest in the application of gypsum to agricultural lands, particularly of gypsum produced during flue gas desulfurization (FGD) at coal-burning power plants. We studied the effects of land application of FGD gypsum to corn ( L.) in watersheds draining to the Great Lakes. The FGD gypsum was surface applied at 11 sites at rates of 0, 1120, 2240, and 4480 kg ha after planting to 3-m by 7.6-m field plots. Approximately 12 wk after application, penetration resistance and hydraulic conductivity were measured in situ, and samples were collected for determination of bulk density and aggregate stability. No treatment effect was detected for penetration resistance or hydraulic conductivity. A positive treatment effect was seen for bulk density at only 2 of 10 sites tested. Aggregate stability reacted similarly across all sites and was decreased with the highest application of FGD gypsum, whereas the lower rates were not different from the control. 
Overall, there were few beneficial effects of the FGD gypsum to soil physical properties in the year of application.", "title": "" }, { "docid": "8891a6c47a7446bb7597471796900867", "text": "The component \"thing\" of the Internet of Things does not yet exist in current business process modeling standards. The \"thing\" is the essential and central concept of the Internet of Things, and without its consideration we will not be able to model the business processes of the future, which will be able to measure or change states of objects in our real-world environment. The presented approach focuses on integrating the concept of the Internet of Things into the meta-model of the process modeling standard BPMN 2.0 as standard-conform as possible. By a terminological and conceptual delimitation, three components of the standard are examined and compared towards a possible expansion. By implementing the most appropriate solution, the new thing concept becomes usable for modelers, both as a graphical and machine-readable element.", "title": "" }, { "docid": "1f130c43ca2dd1431923ef1bbe44d049", "text": "BACKGROUND\nCeaseFire, using an infectious disease approach, addresses violence by partnering hospital resources with the community by providing violence interruption and community-based services for an area roughly composed of a single city zip code (70113). Community-based violence interrupters start in the trauma center from the moment penetrating trauma occurs, through hospital stay, and in the community after release. This study interprets statistics from this pilot program, begun May 2012. We hypothesize a decrease in penetrating trauma rates in the target area compared with others after program implementation.\n\n\nMETHODS\nThis was a 3-year prospective data collection of trauma registry from May 2010 to May 2013. All intentional, target area, penetrating trauma treated at our Level I trauma center received immediate activation of CeaseFire personnel. Incidences of violent trauma and rates of change, by zip code, were compared with the same period for 2 years before implementation.\n\n\nRESULTS\nDuring this period, the yearly incidence of penetrating trauma in Orleans Parish increased. Four of the highest rates were found in adjacent zip codes: 70112, 70113, 70119, and 70125. Average rates per 100,000 were 722.7, 523.6, 286.4, and 248, respectively. These areas represent four of the six zip codes citywide that saw year-to-year increases in violent trauma during this period. Zip 70113 saw a lower rate of rise in trauma compared with 70112 and a higher but comparable rise compared with that of 70119 and 70125.\n\n\nCONCLUSION\nHospital-based intervention programs that partner with culturally appropriate personnel and resources outside the institution walls have potential to have meaningful impact over the long term. While few conclusions of the effect of such a program can be drawn in a 12-month period, we anticipate long-term changes in the numbers of penetrating injuries in the target area and in the rest of the city as this program expands.\n\n\nLEVEL OF EVIDENCE\nTherapeutic study, level IV.", "title": "" }, { "docid": "1509a06ce0b2395466fe462b1c3bd333", "text": "This paper addresses mechanics, design, estimation and control for aerial grasping. We present the design of several light-weight, low-complexity grippers that allow quadrotors to grasp and perch on branches or beams and pick up and transport payloads. 
We then show how the robot can use rigid body dynamic models and sensing to verify a grasp, to estimate the inertial parameters of the grasped object, and to adapt the controller and improve performance during flight. We present experimental results with different grippers and different payloads and show the robot's ability to estimate the mass, the location of the center of mass and the moments of inertia to improve tracking performance.", "title": "" }, { "docid": "488b0adfe43fc4dbd9412d57284fc856", "text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. ∗This work was supported by the Advanced Research Project Agency and the Office of Naval Research under Arpa Order 8888, Contract N00014-92-C-0153.", "title": "" } ]
scidocsrr
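The rows in this dump follow a fixed field layout (query_id, query, positive_passages, negative_passages, subset). As a minimal sketch of how such rows might be read, assuming they are stored one JSON object per line — the file name and the JSON Lines storage format are assumptions, not stated anywhere in this dump:

```python
import json

def iter_rows(path):
    """Yield one retrieval row per line of a JSON Lines file.

    Each row is expected to carry the fields shown in the records above:
    query_id, query, positive_passages, negative_passages, and subset.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# Example usage (the path is a placeholder, not a file named in this dump).
if __name__ == "__main__":
    for row in iter_rows("scidocsrr.jsonl"):
        print(row["query_id"], len(row["positive_passages"]), len(row["negative_passages"]))
```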
780b401aec694cbca6a50d29f5e0c759
From information to knowledge: harvesting entities and relationships from web sources
[ { "docid": "7716416ab97b35ce218673f48b31a5c2", "text": "The availability of large scale data sets of manually annotated predicate-argument structures has recently favored the use of machine learning approaches to the design of automated semantic role labeling (SRL) systems. The main research in this area relates to the design choices for feature representation and for effective decompositions of the task in different learning models. Regarding the former choice, structural properties of full syntactic parses are largely employed as they represent ways to encode different principles suggested by the linking theory between syntax and semantics. The latter choice relates to several learning schemes over global views of the parses. For example, re-ranking stages operating over alternative predicate-argument sequences of the same sentence have shown to be very effective. In this article, we propose several kernel functions to model parse tree properties in kernel-based machines, for example, perceptrons or support vector machines. In particular, we define different kinds of tree kernels as general approaches to feature engineering in SRL. Moreover, we extensively experiment with such kernels to investigate their contribution to individual stages of an SRL architecture both in isolation and in combination with other traditional manually coded features. The results for boundary recognition, classification, and re-ranking stages provide systematic evidence about the significant impact of tree kernels on the overall accuracy, especially when the amount of training data is small. As a conclusive result, tree kernels allow for a general and easily portable feature engineering method which is applicable to a large family of natural language processing tasks.", "title": "" }, { "docid": "c9e47bfe0f1721a937ba503ed9913dba", "text": "The Web contains a vast amount of structured information such as HTML tables, HTML lists and deep-web databases; there is enormous potential in combining and re-purposing this data in creative ways. However, integrating data from this relational web raises several challenges that are not addressed by current data integration systems or mash-up tools. First, the structured data is usually not published cleanly and must be extracted (say, from an HTML list) before it can be used. Second, due to the vastness of the corpus, a user can never know all of the potentially-relevant databases ahead of time (much less write a wrapper or mapping for each one); the source databases must be discovered during the integration process. Third, some of the important information regarding the data is only present in its enclosing web page and needs to be extracted appropriately. This paper describes Octopus, a system that combines search, extraction, data cleaning and integration, and enables users to create new data sets from those found on the Web. The key idea underlying Octopus is to offer the user a set of best-effort operators that automate the most labor-intensive tasks. For example, the Search operator takes a search-style keyword query and returns a set of relevance-ranked and similarity-clustered structured data sources on the Web; the Context operator helps the user specify the semantics of the sources by inferring attribute values that may not appear in the source itself, and the Extend operator helps the user find related sources that can be joined to add new attributes to a table. 
Octopus executes some of these operators automatically, but always allows the user to provide feedback and correct errors. We describe the algorithms underlying each of these operators and experiments that demonstrate their efficacy.", "title": "" } ]
[ { "docid": "f202e380dfd1022e77a04212394be7e1", "text": "As usage of cloud computing increases, customers are mainly concerned about choosing cloud infrastructure with sufficient security. Concerns are greater in the multitenant environment on a public cloud. This paper addresses the security assessment of OpenStack open source cloud solution and virtual machine instances with different operating systems hosted in the cloud. The methodology and realized experiments target vulnerabilities from both inside and outside the cloud. We tested four different platforms and analyzed the security assessment. The main conclusions of the realized experiments show that multi-tenant environment raises new security challenges, there are more vulnerabilities from inside than outside and that Linux based Ubuntu, CentOS and Fedora are less vulnerable than Windows. We discuss details about these vulnerabilities and show how they can be solved by appropriate patches and other solutions. Keywords-Cloud Computing; Security Assessment; Virtualization.", "title": "" }, { "docid": "0ecaccc94977a15cbaee4aaa08509295", "text": "This paper reviews the use of socially interactive robots to assist in the therapy of children with autism. The extent to which the robots were successful in helping the children in their social, emotional, and communication deficits was investigated. Child-robot interactions were scrutinized with respect to the different target behaviours that are to be elicited from a child during therapy. These behaviours were thoroughly examined with respect to a child's development needs. Most importantly, experimental data from the surveyed works were extracted and analyzed in terms of the target behaviours and how each robot was used during a therapy session to achieve these behaviours. The study concludes by categorizing the different therapeutic roles that these robots were observed to play, and highlights the important design features that enable them to achieve high levels of effectiveness in autism therapy.", "title": "" }, { "docid": "90c46b6e7f125481e966b746c5c76c97", "text": "Black-box mutational fuzzing is a simple yet effective technique to find bugs in software. Given a set of program-seed pairs, we ask how to schedule the fuzzings of these pairs in order to maximize the number of unique bugs found at any point in time. We develop an analytic framework using a mathematical model of black-box mutational fuzzing and use it to evaluate 26 existing and new randomized online scheduling algorithms. Our experiments show that one of our new scheduling algorithms outperforms the multi-armed bandit algorithm in the current version of the CERT Basic Fuzzing Framework (BFF) by finding 1.5x more unique bugs in the same amount of time.", "title": "" }, { "docid": "72b77a9a80d7d26e9c5b0b070f8eceb8", "text": "3D City models have so far neglected utility networks in built environments, both interior and exterior. Many urban applications, e.g. emergency response or maintenance operations, are looking for such an integration of interior and exterior utility. Interior utility is usually created and maintained using Building Information Model (BIM) systems, while exterior utility is stored, managed and analyzed using GIS. Researchers have suggested that the best approach for BIM/GIS integration is harmonized semantics, which allow formal mapping between the BIM and real world GIS. 
This paper provides preliminary ideas and directions for how to acquire information from BIM/Industry Foundation Class (IFC) and map it to CityGML utility network Application Domain Extension (ADE). The investigation points out that, in most cases, there is a direct one-to-one mapping between IFC schema and UtilityNetworkADE schema, and only in one case there is one-to-many mapping; related to logical connectivity since there is no exact concept to represent the case in UtilityNetworkADE. Many examples are shown of partial IFC files and their possible translation in order to be represented in UtilityNetworkADE classes. DRAFT VERSION of the paper to be published in Kolbe, T. H.; König, G.; Nagel, C. (Eds.) 2011: Advances in 3D Geo-Information Sciences, ISBN 978-3-642-12669-7 Series Editors: Cartwright, W., Gartner, G., Meng, L., Peterson, M.P., ISSN: 1863-2246 5th International 3D GeoInfo Conference, November 3-4, 2010, Berlin, Germany 1 2 I. Hijazi, M. Ehlers, S. Zlatanova, T. Becker, L.Berlo", "title": "" }, { "docid": "6dfc558d273ec99ffa7dc638912d272c", "text": "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.", "title": "" }, { "docid": "90738b84c4db0a267c7213c923368e6a", "text": "Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks.", "title": "" }, { "docid": "d00df5e0c5990c05d5a67e311586a68a", "text": "The present research explored the controversial link between global self-esteem and externalizing problems such as aggression, antisocial behavior, and delinquency. In three studies, we found a robust relation between low self-esteem and externalizing problems. 
This relation held for measures of self-esteem and externalizing problems based on self-report, teachers' ratings, and parents' ratings, and for participants from different nationalities (United States and New Zealand) and age groups (adolescents and college students). Moreover, this relation held both cross-sectionally and longitudinally and after controlling for potential confounding variables such as supportive parenting, parent-child and peer relationships, achievement-test scores, socioeconomic status, and IQ. In addition, the effect of self-esteem on aggression was independent of narcissism, an important finding given recent claims that individuals who are narcissistic, not low in self-esteem, are aggressive. Discussion focuses on clarifying the relations among self-esteem, narcissism, and externalizing problems.", "title": "" }, { "docid": "7a055093ac92c7d2fa7aa8dcbe47a8b8", "text": "In this paper, we present the design process of a smart bracelet that aims at enhancing the life of elderly people. The bracelet acts as a personal assistant during the user's everyday life, monitoring the health status and alerting him or her about abnormal conditions, reminding medications and facilitating the everyday life in many outdoor and indoor activities.", "title": "" }, { "docid": "50c3e7855f8a654571a62a094a86c4eb", "text": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.", "title": "" }, { "docid": "bf46f77a03bd6915145bee472bde6525", "text": "©2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. DOI: 10.1109/IJCNN.2018.8489656 Abstract—Recurrent neural networks are now the state-ofthe-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. 
We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.", "title": "" }, { "docid": "80d8a8c09e9918981d1a93e5bccf45ba", "text": "In this paper, we study a multi-residential electricity load scheduling problem with multi-class appliances in smart grid. Compared with the previous works in which only limited types of appliances are considered or only single residence grids are considered, we model the grid system more practically with jointly considering multi-residence and multi-class appliance. We formulate an optimization problem to maximize the sum of the overall satisfaction levels of residences which is defined as the sum of utilities of the residential customers minus the total cost for energy consumption. Then, we provide an electricity load scheduling algorithm by using a PL-Generalized Benders Algorithm which operates in a distributed manner while protecting the private information of the residences. By applying the algorithm, we can obtain the near-optimal load scheduling for each residence, which is shown to be very close to the optimal scheduling, and also obtain the lower and upper bounds on the optimal sum of the overall satisfaction levels of all residences, which are shown to be very tight.", "title": "" }, { "docid": "9c38fcfcbfeaf0072e723bd7e1e7d17d", "text": "BACKGROUND\nAllicin (diallylthiosulfinate) is the major volatile- and antimicrobial substance produced by garlic cells upon wounding. We tested the hypothesis that allicin affects membrane function and investigated 1) betanine pigment leakage from beetroot (Beta vulgaris) tissue, 2) the semipermeability of the vacuolar membrane of Rhoeo discolor cells, 3) the electrophysiology of plasmalemma and tonoplast of Chara corallina and 4) electrical conductivity of artificial lipid bilayers.\n\n\nMETHODS\nGarlic juice and chemically synthesized allicin were used and betanine loss into the medium was monitored spectrophotometrically. Rhoeo cells were studied microscopically and Chara- and artificial membranes were patch clamped.\n\n\nRESULTS\nBeet cell membranes were approximately 200-fold more sensitive to allicin on a mol-for-mol basis than to dimethyl sulfoxide (DMSO) and approximately 400-fold more sensitive to allicin than to ethanol. Allicin-treated Rhoeo discolor cells lost the ability to plasmolyse in an osmoticum, confirming that their membranes had lost semipermeability after allicin treatment. Furthermore, allicin and garlic juice diluted in artificial pond water caused an immediate strong depolarization, and a decrease in membrane resistance at the plasmalemma of Chara, and caused pore formation in the tonoplast and artificial lipid bilayers.\n\n\nCONCLUSIONS\nAllicin increases the permeability of membranes.\n\n\nGENERAL SIGNIFICANCE\nSince garlic is a common foodstuff the physiological effects of its constituents are important. Allicin's ability to permeabilize cell membranes may contribute to its antimicrobial activity independently of its activity as a thiol reagent.", "title": "" }, { "docid": "b3112fd3f8bfb5e4a235e17287a2ed50", "text": "The growing complexity of processes in many organizations stimulates the adoption of business process analysis techniques. 
Typically, such techniques are based on process models and assume that the operational processes in reality conform to these models. However, experience shows that reality often deviates from hand-made models. Therefore, the problem of checking to what extent the operational process conforms to the process model is important for process management, process improvement, and compliance. In this paper, we present a robust replay analysis technique that is able to measure the conformance of an event log for a given process model. The approach quantifies conformance and provides intuitive diagnostics (skipped and inserted activities). Our technique has been implemented in the ProM 6 framework. Comparative evaluations show that the approach overcomes many of the limitations of existing conformance checking techniques.", "title": "" }, { "docid": "4e23bf1c89373abaf5dc096f76c893f3", "text": "The clock and data recovery (CDR) circuit plays a vital role for wired serial link communication in multi-mode based systems on chip (SOC). In wire linked communication systems, when data flows without any accompanying clock over a single wire, the receiver of the system is required to recover this data synchronously without losing the information. Therefore there exists a need for CDR circuits in the receiver of the system for recovering the clock or timing information from these data. The existing Octa-rate CDR circuit is not compatible with real-time data; such data is unpredictable, non-periodic and has different arrival times and phase widths. Thus the proposed PRN based Octa-rate Clock and Data Recovery circuit is made compatible with real-time data by introducing a Random Sequence Generator. The proposed PRN based Octa-rate Clock and Data Recovery circuit consists of PRN Sequence Generator, 16-Phase Generator, Early Late Phase Detector and Delay Line Controller. The FSM based Delay Line Controller controls the delay length and introduces the required delay in the input data. The PRN based Octa-rate CDR circuit has been realized using Xilinx ISE 13.2 and implemented on a Virtex-5 FPGA target device for real-time verification. The delay between the input and the generation of output is measured and analyzed using Logic Analyzer AGILENT 1962 A.", "title": "" }, { "docid": "f955d211ee27ac428e54116667913975", "text": "The authors are collaborating with a manufacturer of custom built steel frame modular units which are then transported for rapid erection onsite (volumetric building system). As part of its strategy to develop modular housing, Enemetric is taking the opportunity to develop intelligent buildings, integrating a wide range of sensors and control systems for optimising energy efficiency and directly monitoring structural health. Enemetric have recently been embracing Building Information Modeling (BIM) to improve workflow, in particular cost estimation and to simplify computer aided manufacture (CAM). By leveraging the existing data generated during the design phases, and projecting it to all other aspects of construction management, fewer errors are made and productivity is significantly increased. Enemetric may work on several buildings at once, and scheduling and priorities become especially important for effective workflow, and implementing Enterprise Resource Planning (ERP).
The parametric nature of BIM is also very useful for improving building management, whereby real-time data collection can be logically associated with individual components of the BIM stored in a local Building Management System performing structural health monitoring and environmental monitoring and control. BIM reuse can be further employed in building simulation tools, to apply simulation assisted control strategies, in order to reduce energy consumption, and increase occupant comfort. BIM Integrated Workflow Management and Monitoring System for Modular Buildings", "title": "" }, { "docid": "35eeb2dc882ccc6d97db1d20683fdbe6", "text": "We address the problem of learning vector representations for entities and relations in Knowledge Graphs (KGs) for Knowledge Base Completion (KBC). This problem has received significant attention in the past few years and multiple methods have been proposed. Most of the existing methods in the literature use a predefined characteristic scoring function for evaluating the correctness of KG triples. These scoring functions distinguish correct triples (high score) from incorrect ones (low score). However, their performance varies across different datasets. In this work, we demonstrate that a simple neural network based score function can consistently achieve near state-of-the-art performance on multiple datasets. We also quantitatively demonstrate biases in standard benchmark datasets, and highlight the need to perform evaluation spanning various datasets.", "title": "" }, { "docid": "70a88dbe6952958c2f6ff27c417e2a8e", "text": "Active cameras provide a navigating vehicle with the ability to fixate and track features over extended periods of time, and wide fields of view. While it is relatively straightforward to apply fixating vision to tactical, short-term navigation tasks, using serial fixation on a succession of features to provide global information for strategic navigation is more involved. However, active vision is seemingly well-suited to this task: the ability to measure features over such a wide range means that the same ones can be used as a robot makes a wide range of movements. This has advantages for map-building and localisation. The core work of this thesis concerns simultaneous localisation and map-building for a robot with a stereo active head, operating in an unknown environment and using point features in the world as visual landmarks. Importance has been attached to producing maps which are useful for extended periods of navigation. Many map-building methods fail on extended runs because they do not have the ability to recognise previously visited areas as such and adjust their maps accordingly. With active cameras, it really is possible to re-detect features in an area previously visited, even if the area is not passed through along the original trajectory. Maintaining a large, consistent map requires detailed information to be stored about features and their relationships. This information is computationally expensive to maintain, but a sparse map of landmark features can be handled successfully. We also present a method which can dramatically increase the efficiency of updates in the case that repeated measurements are made of a single feature, permitting continuous real-time tracking of features irrespective of the total map size. Active sensing requires decisions to be made about where resources can best be applied.
A strategy is developed for serially fixating on different features during navigation, making the measurements where most information will be gained to improve map and localisation estimates. A useful map is automatically maintained by adding and deleting features to and from the map when necessary. What sort of tasks should an autonomous robot be able to perform? In most applications, there will be at least some prior information or commands governing the required motion and we will look at how this information can be incorporated with map-building techniques designed for unknown environments. We will make the distinction between position-based navigation, and so-called context-based navigation, where a robot manoeuvres with respect to locally observed parts of the surroundings. A fully automatic, real-time implementation of the ideas developed is presented, and a variety of detailed and extended experiments in a realistic environment are used to evaluate algorithms and make ground-truth comparisons. 1", "title": "" }, { "docid": "486978346e7a77f66e3ccce6f07fb346", "text": "In this paper, we present a novel structure, Semi-AutoEncoder, based on AutoEncoder. We generalize it into a hybrid collaborative filtering model for rating prediction as well as personalized top-n recommendations. Experimental results on two real-world datasets demonstrate its state-of-the-art performances.", "title": "" }, { "docid": "ab0994331a2074fe9b635342fed7331c", "text": "This paper investigates to identify the requirement and the development of machine learning-based mobile big data analysis through discussing the insights of challenges in the mobile big data (MBD). Furthermore, it reviews the state-of-the-art applications of data analysis in the area of MBD. Firstly, we introduce the development of MBD. Secondly, the frequently adopted methods of data analysis are reviewed. Three typical applications of MBD analysis, namely wireless channel modeling, human online and offline behavior analysis, and speech recognition in the internet of vehicles, are introduced respectively. Finally, we summarize the main challenges and future development directions of mobile big data analysis.", "title": "" }, { "docid": "89cc39369eeb6c12a12c61e210c437e3", "text": "Multimodal learning with deep Boltzmann machines (DBMs) is an generative approach to fuse multimodal inputs, and can learn the shared representation via Contrastive Divergence (CD) for classification and information retrieval tasks. However, it is a 2-fan DBM model, and cannot effectively handle multiple prediction tasks. Moreover, this model cannot recover the hidden representations well by sampling from the conditional distribution when more than one modalities are missing. In this paper, we propose a Kfan deep structure model, which can handle the multi-input and muti-output learning problems effectively. In particular, the deep structure has K-branch for different inputs where each branch can be composed of a multi-layer deep model, and a shared representation is learned in an discriminative manner to tackle multimodal tasks. Given the deep structure, we propose two objective functions to handle two multi-input and multi-output tasks: joint visual restoration and labeling, and the multi-view multi-calss object recognition tasks. To estimate the model parameters, we initialize the deep model parameters with CD to maximize the joint distribution, and then we use backpropagation to update the model according to specific objective function. 
The experimental results demonstrate that the model can effectively leverage multi-source information and predict multiple tasks well over competitive baselines.", "title": "" } ]
scidocsrr
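Rows like the one that ends here are commonly consumed by pairing each positive passage with a handful of negatives for the same query when training a retrieval model. The sketch below shows one such pairing, assuming the field layout shown above; the helper name make_triples, the sampling size, and the skipping of empty text fields are illustrative choices rather than anything prescribed by the data.

```python
import random

def make_triples(row, negatives_per_positive=4, seed=0):
    """Pair each positive passage with a few sampled negatives to form
    (query, positive_text, negative_text) training triples.

    The sampling size and the decision to skip empty text fields are
    illustrative choices, not part of the dataset itself.
    """
    rng = random.Random(seed)
    query = row["query"]
    positives = [p["text"] for p in row["positive_passages"] if p["text"]]
    negatives = [n["text"] for n in row["negative_passages"] if n["text"]]
    triples = []
    for pos in positives:
        k = min(negatives_per_positive, len(negatives))
        for neg in rng.sample(negatives, k):
            triples.append((query, pos, neg))
    return triples
```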
244f19e37a8cdaeba09b9581f772e37d
Workload Management in Dynamic IT Service Delivery Organizations
[ { "docid": "254a84aae5d06ae652996535027e282c", "text": "Change management is a process by which IT systems are modified to accommodate considerations such as software fixes, hardware upgrades and performance enhancements. This paper discusses the CHAMPS system, a prototype under development at IBM Research for Change Management with Planning and Scheduling. The CHAMPS system is able to achieve a very high degree of parallelism for a set of tasks by exploiting detailed factual knowledge about the structure of a distributed system from dependency information at runtime. In contrast, today's systems expect an administrator to provide such insights, which is often not the case. Furthermore, the optimization techniques we employ allow the CHAMPS system to come up with a very high quality solution for a mathematically intractable problem in a time which scales nicely with the problem size. We have implemented the CHAMPS system and have applied it in a TPC-W environment that implements an on-line book store application.", "title": "" }, { "docid": "b45f832faf2816d456afa25a3641ffe9", "text": "This book is about feedback control of computing systems. The main idea of feedback control is to use measurements of a system’s outputs, such as response times, throughputs, and utilizations, to achieve externally specified goals. This is done by adjusting the system control inputs, such as parameters that affect buffer sizes, scheduling policies, and concurrency levels. Since the measured outputs are used to determine the control inputs, and the inputs then affect the outputs, the architecture is called feedback or closed loop. Almost any system that is considered automatic has some element of feedback control. In this book we focus on the closed-loop control of computing systems and methods for their analysis and design.", "title": "" } ]
[ { "docid": "7ec12c0bf639c76393954baae196a941", "text": "Honeynets have now become a standard part of security measures within the organization. Their purpose is to protect critical information systems and information; this is complemented by acquisition of information about the network threats, attackers and attacks. It is very important to consider issues affecting the deployment and usage of the honeypots and honeynets. This paper discusses the legal issues of honeynets considering their generations. Paper focuses on legal issues of core elements of honeynets, especially data control, data capture and data collection. Paper also draws attention on the issues pertaining to privacy and liability. The analysis of legal issues is based on EU law and it is supplemented by a review of the research literature, related to legal aspects of honeypots and honeynets.", "title": "" }, { "docid": "376f28143deecc7b95fe45d54dd16bb6", "text": "We investigate the problem of lung nodule malignancy suspiciousness (the likelihood of nodule malignancy) classification using thoracic Computed Tomography (CT) images. Unlike traditional studies primarily relying on cautious nodule segmentation and time-consuming feature extraction, we tackle a more challenging task on directly modeling raw nodule patches and building an end-to-end machinelearning architecture for classifying lung nodule malignancy suspiciousness. We present a Multi-crop Convolutional Neural Network (MC-CNN) to automatically extract nodule salient information by employing a novel multi-crop pooling strategy which crops different regions from convolutional feature maps and then applies max-pooling different times. Extensive experimental results show that the proposed method not only achieves state-of-the-art nodule suspiciousness classification performance, but also effectively characterizes nodule semantic attributes (subtlety and margin) and nodule diameter which are potentially helpful in modeling nodule malignancy. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "05f3d2097efffb3e1adcbede16ec41d2", "text": "BACKGROUND\nDialysis patients with uraemic pruritus (UP) have significantly impaired quality of life. To assess the therapeutic effect of UP treatments, a well-validated comprehensive and multidimensional instrument needed to be established.\n\n\nOBJECTIVES\nTo develop and validate a multidimensional scale assessing UP in patients on dialysis: the Uraemic Pruritus in Dialysis Patients (UP-Dial).\n\n\nMETHODS\nThe development and validation of the UP-Dial instrument were conducted in four phases: (i) item generation, (ii) development of a pilot questionnaire, (iii) refinement of the questionnaire with patient recruitment and (iv) psychometric validation. Participants completed the UP-Dial, the visual analogue scale (VAS) of UP, the Dermatology Life Quality Index (DLQI), the Kidney Disease Quality of Life-36 (KDQOL-36), the Pittsburgh Sleep Quality Index (PSQI) and the Beck Depression Inventory (BDI) between 15 May 2012 and 30 November 2015.\n\n\nRESULTS\nThe 27-item pilot UP-Dial was generated, with 168 participants completing the pilot scale. After factor analysis was performed, the final 14-item UP-Dial encompassed three domains: signs and symptoms, psychosocial, and sleep. Face and content validity were satisfied through the item generation process and expert review. Psychometric analysis demonstrated that the UP-Dial had good convergent and discriminant validity. 
The UP-Dial was significantly correlated [Spearman rank coefficient, 95% confidence interval (CI)] with the VAS-UP (0·76, 0·69-0·83), DLQI (0·78, 0·71-0·85), KDQOL-36 (-0·86, -0·91 to -0·81), PSQI (0·85, 0·80-0·89) and BDI (0·70, 0·61-0·79). The UP-Dial revealed excellent internal consistency (Cronbach's α 0·90, 95% CI 0·87-0·92) and reproducibility (intraclass correlation 0·95, 95% CI 0·90-0·98).\n\n\nCONCLUSIONS\nThe UP-Dial is valid and reliable for assessing UP among patients on dialysis. Future research should focus on the cross-cultural adaptation and translation of the scale to other languages.", "title": "" }, { "docid": "305efd1823009fe79c9f8ff52ddb5724", "text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.", "title": "" }, { "docid": "1fc965670f71d9870a4eea93d129e285", "text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "def650b2d565f88a6404997e9e93d34f", "text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. 
However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedbacks on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.", "title": "" }, { "docid": "e6d309d24e7773d7fc78c3ebeb926ba0", "text": "INTRODUCTION\nLiver disease is the third most common cause of premature mortality in the UK. Liver failure accelerates frailty, resulting in skeletal muscle atrophy, functional decline and an associated risk of liver transplant waiting list mortality. However, there is limited research investigating the impact of exercise on patient outcomes pre and post liver transplantation. The waitlist period for patients listed for liver transplantation provides a unique opportunity to provide and assess interventions such as prehabilitation.\n\n\nMETHODS AND ANALYSIS\nThis study is a phase I observational study evaluating the feasibility of conducting a randomised control trial (RCT) investigating the use of a home-based exercise programme (HBEP) in the management of patients awaiting liver transplantation. Twenty eligible patients will be randomly selected from the Queen Elizabeth University Hospital Birmingham liver transplant waiting list. Participants will be provided with an individually tailored 12-week HBEP, including step targets and resistance exercises. Activity trackers and patient diaries will be provided to support data collection. For the initial 6 weeks, telephone support will be given to discuss compliance with the study intervention, achievement of weekly targets, and to address any queries or concerns regarding the intervention. During weeks 6-12, participants will continue the intervention without telephone support to evaluate longer term adherence to the study intervention. On completing the intervention, all participants will be invited to engage in a focus group to discuss their experiences and the feasibility of an RCT.\n\n\nETHICS AND DISSEMINATION\nThe protocol is approved by the National Research Ethics Service Committee North West - Greater Manchester East and Health Research Authority (REC reference: 17/NW/0120). Recruitment into the study started in April 2017 and ended in July 2017. Follow-up of participants is ongoing and due to finish by the end of 2017. The findings of this study will be disseminated through peer-reviewed publications and international presentations. In addition, the protocol will be placed on the British Liver Trust website for public access.\n\n\nTRIAL REGISTRATION NUMBER\nNCT02949505; Pre-results.", "title": "" }, { "docid": "712a4bdb5b285f3ef52218096ec3a4bf", "text": "We describe the relations between active maintenance of the hand at various positions in a two-dimensional space and the frequency of single cell discharge in motor cortex (n = 185) and area 5 (n = 128) of the rhesus monkey. 
The steady-state discharge rate of 124/185 (67%) motor cortical and 105/128 (82%) area 5 cells varied with the position in which the hand was held in space (“static spatial effect”). The higher prevalence of this effect in area 5 was statistically significant. In both structures, static effects were observed at similar frequencies for cells that possessed as well as for those that lacked passive driving from the limb. The results obtained by a quantitative analysis were similar for neurons of the two cortical areas studied. It was found that of the neurons with a static effect, the steady-state discharge rate of 78/124 (63%) motor cortical and 63/105 (60%) area 5 cells was a linear function of the position of the hand across the two-dimensional space, so that the neuronal “response surface” was adequately described by a plane (R2 ≥ 0.7, p < 0.05, F-test in analysis of variance). The preferred orientations of these response planes differed for different cells. These results indicate that individual cells in these areas do not relate uniquely a particular position of the hand in space. Instead, they seem to encode spatial gradients at certain orientations. A unique relation to position in space could be signalled by the whole population of these neurons, considered as an ensemble. This remains to be elucidated. Finally, the similarity of the quantitative relations observed in motor cortex and area 5 suggests that these structures may process spatial information in a similar way.", "title": "" }, { "docid": "7c19a963cd3ad7119278744e73c1c27a", "text": "This work presents a study of three important issues of the color pixel classification approach to skin segmentation: color representation, color quantization, and classification algorithm. Our analysis of several representative color spaces using the Bayesian classifier with the histogram technique shows that skin segmentation based on color pixel classification is largely unaffected by the choice of the color space. However, segmentation performance degrades when only chrominance channels are used in classification. Furthermore, we find that color quantization can be as low as 64 bins per channel, although higher histogram sizes give better segmentation performance. The Bayesian classifier with the histogram technique and the multilayer perceptron classifier are found to perform better compared to other tested classifiers, including three piecewise linear classifiers, three unimodal Gaussian classifiers, and a Gaussian mixture classifier.", "title": "" }, { "docid": "cdcbbe1e40a36974ac333912940718a7", "text": "Plant growth promoting rhizobacteria (PGPR) are beneficial bacteria which have the ability to colonize the roots and either promote plant growth through direct action or via biological control of plant diseases (Kloepper and Schroth 1978). They are associated with many plant species and are commonly present in varied environments. Strains with PGPR activity, belonging to genera Azoarcus, Azospirillum, Azotobacter, Arthrobacter, Bacillus, Clostridium, Enterobacter, Gluconacetobacter, Pseudomonas, and Serratia, have been reported (Hurek and Reinhold-Hurek 2003). Among these, species of Pseudomonas and Bacillus are the most extensively studied. These bacteria competitively colonize the roots of plant and can act as biofertilizers and/or antagonists (biopesticides) or simultaneously both. 
Diversified populations of aerobic endospore forming bacteria (AEFB), viz., species of Bacillus, occur in agricultural fields and contribute to crop productivity directly or indirectly. Physiological traits, such as multilayered cell wall, stress resistant endospore formation, and secretion of peptide antibiotics, peptide signal molecules, and extracellular enzymes, are ubiquitous to these bacilli and contribute to their survival under adverse environmental conditions for extended periods of time. Multiple species of Bacillus and Paenibacillus are known to promote plant growth. The principal mechanisms of growth promotion include production of growth stimulating phytohormones, solubilization and mobilization of phosphate, siderophore production, antibiosis, i.e., production of antibiotics, inhibition of plant ethylene synthesis, and induction of plant systemic resistance to pathogens (Richardson et al. 2009; Idris et al. 2007; Gutierrez-Manero et al. 2001;", "title": "" }, { "docid": "d51a844fa1ec4a63868611d73c6acfad", "text": "Massive open online courses (MOOCs) attract a large number of student registrations, but recent studies have shown that only a small fraction of these students complete their courses. Student dropouts are thus a major deterrent for the growth and success of MOOCs. We believe that understanding student engagement as a course progresses is essential for minimizing dropout rates. Formally defining student engagement in an online setting is challenging. In this paper, we leverage activity (such as posting in discussion forums, timely submission of assignments, etc.), linguistic features from forum content and structural features from forum interaction to identify two different forms of student engagement (passive and active) in MOOCs. We use probabilistic soft logic (PSL) to model student engagement by capturing domain knowledge about student interactions and performance. We test our models on MOOC data from Coursera and demonstrate that modeling engagement is helpful in predicting student performance.", "title": "" }, { "docid": "bc05c9cafade197494b52cf3f2ff091b", "text": "Modern software systems are increasingly requested to be adaptive to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be empowered by a rich run-time support that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the data collected to understand the possible consequences of changes, reason about the ability of the application to continue to provide the required service, and finally react if an adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the paper in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. 
Non-functional requirements—such as reliability and performance—heavily depend on the environment in which the application is embedded. Thus changes in the environment may ultimately adversely affect QoS satisfaction. We illustrate an approach and support tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about dependability of the application in quantitative terms. Models continue to exist at run time to enable continuous verification and detection of changes that require adaptation.", "title": "" }, { "docid": "1baaed4083a1a8315f8d5cd73730c81e", "text": "While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. To achieve integrated intelligence that involves both perception and inference, it is naturally desirable to tightly integrate deep learning and Bayesian models within a principled probabilistic framework, which we call Bayesian deep learning. In this unified framework, the perception of text or images using deep learning can boost the performance of higher-level inference and in return, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a general introduction to Bayesian deep learning and reviews its recent applications on recommender systems, topic models, and control. In this survey, we also discuss the relationship and differences between Bayesian deep learning and other related topics like Bayesian treatment of neural networks.", "title": "" }, { "docid": "85f5833628a4b50084fa50cbe45ebe4d", "text": "We introduce a functional gradient descent trajectory optimization algorithm for robot motion planning in Reproducing Kernel Hilbert Spaces (RKHSs). Functional gradient algorithms are a popular choice for motion planning in complex many-degree-of-freedom robots, since they (in theory) work by directly optimizing within a space of continuous trajectories to avoid obstacles while maintaining geometric properties such as smoothness. However, in practice, implementations such as CHOMP and TrajOpt typically commit to a fixed, finite parametrization of trajectories, often as a sequence of waypoints. Such a parameterization can lose much of the benefit of reasoning in a continuous trajectory space: e.g., it can require taking an inconveniently small step size and large number of iterations to maintain smoothness. Our work generalizes functional gradient trajectory optimization by formulating it as minimization of a cost functional in an RKHS. This generalization lets us represent trajectories as linear combinations of kernel functions. As a result, we are able to take larger steps and achieve a locally optimal trajectory in just a few iterations. Depending on the selection of kernel, we can directly optimize in spaces of trajectories that are inherently smooth in velocity, jerk, curvature, etc., and that have a low-dimensional, adaptively chosen parameterization. 
Our experiments illustrate the effectiveness of the planner for different kernels, including Gaussian RBFs with independent and coupled interactions among robot joints, Laplacian RBFs, and B-splines, as compared to the standard discretized waypoint representation.", "title": "" }, { "docid": "fda37e6103f816d4933a3a9c7dee3089", "text": "This paper introduces a novel approach to estimate the systolic and diastolic blood pressure ratios (SBPR and DBPR) based on the maximum amplitude algorithm (MAA) using a Gaussian mixture regression (GMR). The relevant features, which clearly discriminate the SBPR and DBPR according to the targeted groups, are selected in a feature vector. The selected feature vector is then represented by the Gaussian mixture model. The SBPR and DBPR are subsequently obtained with the help of the GMR and then mapped back to SBP and DBP values that are more accurate than those obtained with the conventional MAA method.", "title": "" }, { "docid": "2ee1f7a56eba17b75217cca609452f20", "text": "We describe the annotation of a new dataset for German Named Entity Recognition (NER). The need for this dataset is motivated by licensing issues and consistency issues of existing datasets. We describe our approach to creating annotation guidelines based on linguistic and semantic considerations, and how we iteratively refined and tested them in the early stages of annotation in order to arrive at the largest publicly available dataset for German NER, consisting of over 31,000 manually annotated sentences (over 591,000 tokens) from German Wikipedia and German online news. We provide a number of statistics on the dataset, which indicate its high quality, and discuss legal aspects of distributing the data as a compilation of citations. The data is released under the permissive CC-BY license, and will be fully available for download in September 2014 after it has been used for the GermEval 2014 shared task on NER. We further provide the full annotation guidelines and links to the annotation tool used for the creation of this resource.", "title": "" }, { "docid": "5fc9fe7bcc50aad948ebb32aefdb2689", "text": "This paper explores the use of set expansion (SE) to improve question answering (QA) when the expected answer is a list of entities belonging to a certain class. Given a small set of seeds, SE algorithms mine textual resources to produce an extended list including additional members of the class represented by the seeds. We explore the hypothesis that a noise-resistant SE algorithm can be used to extend candidate answers produced by a QA system and generate a new list of answers that is better than the original list produced by the QA system. We further introduce a hybrid approach which combines the original answers from the QA system with the output from the SE algorithm. Experimental results for several state-of-the-art QA systems show that the hybrid system performs better than the QA systems alone when tested on list question data from past TREC evaluations.", "title": "" }, { "docid": "ec5aac01866a1e4ca3f4e906990d5d8e", "text": "But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one orderof-magnitude improvement in productivity, in reliability, in simplicity. 
In this article, I shall try to show why, by examining both the nature of the software problem and the properties of the bullets proposed.", "title": "" }, { "docid": "960022742172d6d0e883a23c74d800ef", "text": "A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.", "title": "" }, { "docid": "cfddb85a8c81cb5e370fe016ea8d4c5b", "text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.", "title": "" } ]
scidocsrr
009a4972275cc44fe7e4cc46b69d8a05
Employees' Information Security Awareness and Behavior: A Literature Review
[ { "docid": "b99b9f80b4f0ca4a8d42132af545be76", "text": "By: Catherine L. Anderson Decision, Operations, and Information Technologies Department Robert H. Smith School of Business University of Maryland Van Munching Hall College Park, MD 20742-1815 U.S.A. Catherine_Anderson@rhsmith.umd.edu Ritu Agarwal Center for Health Information and Decision Systems University of Maryland 4327 Van Munching Hall College Park, MD 20742-1815 U.S.A. ragarwal@rhsmith.umd.edu", "title": "" } ]
[ { "docid": "3fcce3664db5812689c121138e2af280", "text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.", "title": "" }, { "docid": "de569abb181a993a6da91b7da0baf3cf", "text": "The field of image denoising is currently dominated by discriminative deep learning methods that are trained on pairs of noisy input and clean target images. Recently it has been shown that such methods can also be trained without clean targets. Instead, independent pairs of noisy images can be used, in an approach known as NOISE2NOISE (N2N). Here, we introduce NOISE2VOID (N2V), a training scheme that takes this idea one step further. It does not require noisy image pairs, nor clean target images. Consequently, N2V allows us to train directly on the body of data to be denoised and can therefore be applied when other methods cannot. Especially interesting is the application to biomedical image data, where the acquisition of training targets, clean or noisy, is frequently not possible. We compare the performance of N2V to approaches that have either clean target images and/or noisy image pairs available. Intuitively, N2V cannot be expected to outperform methods that have more information available during training. Still, we observe that the denoising performance of NOISE2VOID drops in moderation and compares favorably to training-free denoising methods.", "title": "" }, { "docid": "bd8470bab582c3742f5382831431ddb0", "text": "Roaming users who use untrusted machines to access password protected accounts have few good options. An internet café machine can easily be running a keylogger. The roaming user has no reliable way of determining whether it is safe, and has no alternative to typing the password. We describe a simple trick the user can employ that is entirely effective in concealing the password. We verify its efficacy against the most popular keylogging programs.", "title": "" }, { "docid": "d286afa5ef0e67904d78883080fe073a", "text": "As the cellular networks continue to progress between generations, the expectations of 5G systems are planned toward high-capacity communication links that can provide users access to numerous types of applications (e.g., augmented reality and holographic multimedia streaming). The demand for higher bandwidth has led the research community to investigate unexplored frequency spectrums, such as the terahertz band for 5G. However, this particular spectrum is strived with numerous challenges, which includes the need for line-of-sight (LoS) links as reflections will deflect the waves as well as molecular absorption that can affect the signal strength. This is further amplified when a high quality of service has to be maintained over infrastructure that supports mobility, as users (or groups of users) migrate between locations, requiring frequent handover for roaming. 
In this paper, the concept of mirror-assisted wireless coverage is introduced, where smart antennas are utilized with dielectric mirrors that act as reflectors for the terahertz waves. The objective is to utilize information such as the user's location and to direct the reflective beam toward the highest concentration of users. A multiray model is presented in order to develop the propagation models for both indoor and outdoor scenarios in order to validate the proposed use of the reflectors. An office and a pedestrian-walking scenarios are used for indoor and outdoor scenarios, respectively. The results from the simulation work show an improvement with the usage of mirror-assisted wireless coverage, improving the overall capacity, the received power, the path loss, and the probability of LoS.", "title": "" }, { "docid": "df04a11d82e8ccf8ea5af180f77bc5f3", "text": "More and more cities are looking for service providers able to deliver 3D city models in a short time. Airborne laser scanning techniques make it possible to acquire a three-dimensional point cloud leading almost instantaneously to digital surface models (DSM), but these models are far from a topological 3D model needed by geographers or land surveyors. The aim of this paper is to present the pertinence and advantages of combining simultaneously the point cloud and the normalized DSM (nDSM) in the main steps of a building reconstruction approach. This approach has been implemented in order to exempt any additional data and to automate the process. The proposed workflow firstly extracts the off-terrain mask based on DSM. Then, it combines the point cloud and the DSM for extracting a building mask from the off-terrain. At last, based on the previously extracted building mask, the reconstruction of 3D flat roof models is carried out and analyzed.", "title": "" }, { "docid": "83a8e06926e25b256db367df6df6b3d9", "text": "The proposed System assists the sensor based mobile robot navigation in an indoor environment using Fuzzy logic controller. Fuzzy logic control is well suited for controlling a mobile robot because it is capable of making inferences even under uncertainty. It assists rules generation and decision-making. It uses set of linguistic Fuzzy rules to implement expert knowledge under various situations. A Fuzzy logic system is designed with two basic behaviors- obstacle avoidance and a target seeking behavior. The inputs to the Fuzzy logic controller are the desired direction of motion and the readings from the sensors. The outputs from the Fuzzy logic controller are the accelerations of robot wheels. Under the proposed Fuzzy model, a mobile robot avoids the obstacles and generates the path towards the target.", "title": "" }, { "docid": "338e037f4ec9f6215f48843b9d03f103", "text": "Sparse deep neural networks(DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to irregularity in computation of sparse DNNs, their efficiencies are much lower than that of dense DNNs on general purpose hardwares. This leads to poor/no performance benefits for sparse DNNs. Performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to sparse models with suboptimal accuracies. 
In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of neural networks called HBsNN (Hierarchical Block sparse Neural Networks).", "title": "" }, { "docid": "350d1717a5192873ef9e0ac9ed3efc7b", "text": "OBJECTIVE\nTo describe the effects of percutaneously implanted valve-in-valve in the tricuspid position for patients with pre-existing transvalvular device leads.\n\n\nMETHODS\nIn this case series, we describe implantation of the Melody valve and SAPIEN XT valve within dysfunctional bioprosthetic tricuspid valves in three patients with transvalvular device leads.\n\n\nRESULTS\nIn all cases, the valve was successfully deployed and device lead function remained unchanged. In 1/3 cases with 6-month follow-up, device lead parameters remain unchanged and transcatheter valve-in-valve function remains satisfactory.\n\n\nCONCLUSIONS\nTranscatheter tricuspid valve-in-valve is feasible in patients with pre-existing transvalvular devices leads. Further study is required to determine the long-term clinical implications of this treatment approach.", "title": "" }, { "docid": "7927dffe38cec1ce2eb27dbda644a670", "text": "This paper describes our system for SemEval-2010 Task 8 on multi-way classification of semantic relations between nominals. First, the type of semantic relation is classified. Then a relation typespecific classifier determines the relation direction. Classification is performed using SVM classifiers and a number of features that capture the context, semantic role affiliation, and possible pre-existing relations of the nominals. This approach achieved an F1 score of 82.19% and an accuracy of 77.92%.", "title": "" }, { "docid": "522ea0f60a2c010747bb90005f86cb91", "text": "The first part of this paper intends to give an overview of the maximum power point tracking methods for photovoltaic (PV) inverters presently reported in the literature. The most well-known and popular methods, like the perturb and observe (P&O), the incremental conductance (INC) and the constant voltage (CV), are presented. These methods, especially the P&O, have been treated by many works, which aim to overcome their shortcomings, either by optimizing the methods, or by combining them. In the second part of the paper an improvement for the P&O and INC method is proposed, which prevents these algorithms to get confused during rapidly changing irradiation conditions, and it considerably increases the efficiency of the MPPT", "title": "" }, { "docid": "c17240d9adc3720020adff6d7ab3b59f", "text": "class LifeCycleBindingMixin { String bindFc (...) { if (getFcState() != STOPPED) throw new Exception(); return _super_bindFc(...); } abstract String _super_bindFc (...);} class XYZ extends BasicBindingController { String bindFc (...) { if (getFcState() != STOPPED) throw new Exception(); return super.bindFc(...); } }  Result of of mixins composition depends on the order in which they are composed  Controllers are builts as composition of control classes and mixins T. Coupaye, LAFMI Summer School, Puebla, Mexico, August 2004 France Telecom R&D Division 37 Interceptors  Interceptors  Most control aspects have two parts  A generic part (a.k.a. “advice”)  A specific part based on interception of interactions between components (a.k.a. 
« hooks »)  Interceptors have to be inserted in functional (applicative) code  Interceptor classes are generated in bytecode form by a generator which relies on ASM  Interceptor class generator  G(class, interface(s), aspect code weaver(s)) -> subclass of class which implements interface(s) and aspect(s)  Transformations are composed (in the class) in the order aspect code weavers are given  Aspect code weaver  An object that can manipulate the bytecode of operations arbitrarily  Example:  Transformation of void m { return delegate.m }  Into void m { // pre code... try {delegate.m();} finally {//post code... }}  Configuration  Interceptors associated to a component are specified at component creation time  Julia comes with a library of code weavers:  life cycle, trace, reification of operation names, reification of operation names and arguments Life Cycle Management Approach based on invocation count  Interceptors behind all interfaces increment and decrement a counter in LifeCycle controller  LifeCycle controller waits for counters to be nil to stop the component (STARTED->STOPPED) when the component is in state STOPPED, all activities (including new incoming ones) are blocked  activities (and counter increment) are unblocked when the component is started again Composite components stop recursively  the primitive components in their content  and primitive client components of these components Because of inter-component optimization (detailed later)  Same algorithm with n counters  NB: needs to wait for n counters to be nil at the same time with a risk of livelock Limitations  Risk of livelock when waiting for n counters to be nil at the same time  No state management hence integrity is not fully guaranteed during reconfigurations Intra-component optimization  3 possibilities for memory optimization  Fusion of controller objects (left)  Fusion of controller objects and interceptors (middle) if interceptors do all delegate to the same object  Fusion of controllers and contents (right) for primitive components Merging is done in bytecode form by generating a class based on lexicographic patterns in concerned controller classes  weavableX for a required interface of type X in controller is replaced by this in the generated class  weavableOptY for a required interface of type Y is replaced by this or null in the generated class Inter-component Optimization Shortcut algorithm  Optimized links for performance (“shortcuts”) substituted for implementation and delegate links in binding chains NB:  behaviour is hazardous if components exchange references directly (e.g. this) instead of always using the Fractal API  Shortcuts must be recomputed each time a binding is changed Initial path", "title": "" }, { "docid": "94aeb6dad00f174f89b709feab3db21f", "text": "We present a novel approach to the automatic acquisition of taxonomies or concept hierarchies from a text corpus. The approach is based on Formal Concept Analysis (FCA), a method mainly used for the analysis of data, i.e. for investigating and processing explicitly given information. 
We follow Harris’ distributional hypothesis and model the context of a certain term as a vector representing syntactic dependencies which are automatically acquired from the text corpus with a linguistic parser. On the basis of this context information, FCA produces a lattice that we convert into a special kind of partial order constituting a concept hierarchy. The approach is evaluated by comparing the resulting concept hierarchies with hand-crafted taxonomies for two domains: tourism and finance. We also directly compare our approach with hierarchical agglomerative clustering as well as with Bi-Section-KMeans as an instance of a divisive clustering algorithm. Furthermore, we investigate the impact of using different measures weighting the contribution of each attribute as well as of applying a particular smoothing technique to cope with data sparseness.", "title": "" }, { "docid": "de721f4b839b0816f551fa8f8ee2065e", "text": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.", "title": "" }, { "docid": "3eff4654a3bbf9aa3fbfe15033383e67", "text": "Pizza is a strict superset of Java that incorporates three ideas from the academic community: parametric polymorphism, higher-order functions, and algebraic data types. Pizza is defined by translation into Java and compiles into the Java Virtual Machine, requirements which strongly constrain the design space. Nonetheless, Pizza fits smoothly to Java, with only a few rough edges.", "title": "" }, { "docid": "b71477154243283819d499c381119c2d", "text": "Indonesia is one of countries well-known as the biggest palm oil producers in the world. In 2015, this country succeeded to produce 32.5 million tons of palm oil, and used 26.4 million of it to export to other countries. The quality of Indonesia's palm oil production has become the reason why Indonesia becomes the famous exporter in a global market. For this reason, many Indonesian palm oil companies are trying to improve their quality through smart farming. One of the ways to improve is by using technology such as Internet of Things (IoT). In order to have the actual and real-time condition of the land, using the IoT concept by connecting some sensors. A previous research has accomplished to create some Application Programming Interfaces (API), which can be used to support the use of technology. However, these APIs have not been integrated to a User Interface (UI), as it can only be used by developers or programmers. These APIs have not been able to be used as a monitoring information system for palm oil plantation, which can be understood by the employees. 
Based on those problems, this research attempts to develop a monitoring information system, which will be integrated with the APIs from the previous research by using the Progressive Web App (PWA) approach. So, this monitoring information system can be accessed by the employees, either by using smartphone or by using desktop. Even, it can work similar with a native application.", "title": "" }, { "docid": "590931691f16239904733befab24e70a", "text": "In a neural network, neuron computation is achieved through the summation of input signals fed by synaptic connections. The synaptic activity (weight) is dictated by the synchronous firing of neurons, inducing potentiation/depression of the synaptic connection. This learning function can be supported by the resistive switching memory (RRAM), which changes its resistance depending on the amplitude, the pulse width and the bias polarity of the applied signal. This work shows a new synapse circuit comprising a MOS transistor as a selector and a RRAM as a variable resistance, displaying spike-timing dependent plasticity (STDP) similar to the one originally experienced in biological neural networks. We demonstrate long-term potentiation and long-term depression by simulations with an analytical model of resistive switching. Finally, the experimental demonstration of the new STDP scheme is presented.", "title": "" }, { "docid": "3c6dcd92cbbf0cf4a5175dc61b401aae", "text": "Increased number of malware samples have created many challenges for Antivirus companies. One of these challenges is clustering the large number of malware samples they receive daily. Malware authors use malware generation kits to create different instances of the same malware. So most of these malicious samples are polymorphic instances of previously known malware family only. Clustering these large number of samples rapidly and accurately without spending much time on processing the sample have become a critical requirement. In this paper we proposed, implemented and evaluated a method, called ByteFreq that can cluster large number of samples using byte frequency. Byte frequency is represented as time series and SAX (Symbolic Aggregation approXimation)[1] is used to convert the time series in symbolic representation. We evaluated proposed system on real world malware samples and achieved 0.92 precision and 0.96 recall accuracy.", "title": "" }, { "docid": "53ab91cdff51925141c43c4bc1c6aade", "text": "Floods are the most common natural disasters, and cause significant damage to life, agriculture and economy. Research has moved on from mathematical modeling or physical parameter based flood forecasting schemes, to methodologies focused around algorithmic approaches. The Internet of Things (IoT) is a field of applied electronics and computer science where a system of devices collects data in real time and transfers it through a Wireless Sensor Network (WSN) to the computing device for analysis. IoT generally combines embedded system hardware techniques along with data science or machine learning models. In this work, an IoT and machine learning based embedded system is proposed to predict the probability of floods in a river basin. The model uses a modified mesh network connection over ZigBee for the WSN to collect data, and a GPRS module to send the data over the internet. The data sets are evaluated using an artificial neural network model. 
The results of the analysis which are also appended show a considerable improvement over the currently existing methods.", "title": "" }, { "docid": "b9d25bdbb337a9d16a24fa731b6b479d", "text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.", "title": "" }, { "docid": "6ec4c9e6b3e2a9fd4da3663a5b21abcd", "text": "In order to ensure the service quality, modern Internet Service Providers (ISPs) invest tremendously on their network monitoring and measurement infrastructure. Vast amount of network data, including device logs, alarms, and active/passive performance measurement across different network protocols and layers, are collected and stored for analysis. As network measurement grows in scale and sophistication, it becomes increasingly challenging to effectively “search” for the relevant information that best support the needs of network operations. In this paper, we look into techniques that have been widely applied in the information retrieval and search engine domain and explore their applicability in network management domain. We observe that unlike the textural information on the Internet, network data are typically annotated with time and location information, which can be further augmented using information based on network topology, protocol and service dependency. We design NetSearch, a system that pre-processes various network data sources on data ingestion, constructs index that matches both the network spatial hierarchy model and the inherent timing/textual information contained in the data, and efficiently retrieves the relevant information that network operators search for. Through case study, we demonstrate that NetSearch is an important capability for many critical network management functions such as complex impact analysis.", "title": "" } ]
scidocsrr
d98a66e5784413ede737ed404d1bb790
Wikipedia in the eyes of its beholders: A systematic review of scholarly research on Wikipedia readers and readership
[ { "docid": "ee08bd4b35b875bd9c12b6707406fdde", "text": "I here give an overview of Wikipedia and wiki research and tools. Well over 1,000 reports have been published in the field and there exist dedicated scientific meetings for Wikipedia research. It is not possible to give a complete review of all material published. This overview serves to describe some key areas of research.", "title": "" } ]
[ { "docid": "cdeaf14d18c32ca534e8e76b9025db42", "text": "A broadband dual-polarized base station antenna with sturdy construction is presented in this letter. The antenna mainly contains four parts: main radiator, feeding baluns, bedframe, and reflector. First, two orthogonal dipoles are etched on a substrate as main radiator forming dual polarization. Two baluns are then introduced to excite the printed dipoles. Each balun has four bumps on the edges for electrical connection and fixation. The bedframe is designed to facilitate the installation, and the reflector is finally used to gain unidirectional radiation. Measured results show that the antenna has a 48% impedance bandwidth with reflection coefficient less than –15 dB and port isolation more than 22 dB. A four-element antenna array with 6° ± 2° electrical down tilt is also investigated for wideband base station application. The antenna and its array have the advantages of sturdy construction, high machining accuracy, ease of integration, and low cost. They can be used for broadband base station in the next-generation wireless communication system.", "title": "" }, { "docid": "541055772a5c2bed70649d2ca9a6c584", "text": "This report discusses methods for forecasting hourly loads of a US utility as part of the load forecasting track of the Global Energy Forecasting Competition 2012 hosted on Kaggle. The methods described (gradient boosting machines and Gaussian processes) are generic machine learning / regression algorithms and few domain specific adjustments were made. Despite this, the algorithms were able to produce highly competitive predictions and hopefully they can inspire more refined techniques to compete with state-of-the-art load forecasting methodologies.", "title": "" }, { "docid": "e0ec22fcdc92abe141aeb3fa67e9e55a", "text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited, if their battery power is depleted fully, then this result in network partition, so these nodes becomes a critical spot in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. These unbalanced loads turn to increase the chances of nodes failure, network partition and reduce the route lifetime and route reliability of the MANETs. Due to this, energy consumption issue becomes a vital research topic in wireless infrastructure -less networks. The energy efficient routing is a most important design criterion for MANETs. This paper focuses of the routing approaches are based on the minimization of energy consum ption of individual nodes and many other ways. This paper surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks. Also presents detailed comparative study of lager number of energy efficient/power aware routing protocol in MANETs. Aim of this paper to helps the new researchers and application developers to explore an innovative idea for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Eff icient, Power Aware, Protocol Stack", "title": "" }, { "docid": "12524304546ca59b7e8acb2a7f6d6699", "text": "Multiple-choice items are a mainstay of achievement testing. 
The need to adequately cover the content domain to certify achievement proficiency by producing meaningful precise scores requires many high-quality items. More 3-option items can be administered than 4- or 5-option items per testing time while improving content coverage, without detrimental effects on psychometric quality of test scores. Researchers have endorsed 3-option items for over 80 years with empirical evidence—the results of which have been synthesized in an effort to unify this endorsement and encourage its adoption.", "title": "" }, { "docid": "6f125b0a1f7de3402c1a6e2af72af506", "text": "The location-based service (LBS) of mobile communication and the personalization of information recommendation are two important trends in the development of electronic commerce. However, many previous studies have emphasized only one of the two trends. In this paper, we integrate the application of LBS with recommendation technologies to present a location-based service recommendation model (LBSRM) and design a prototype system to simulate and measure the validity of LBSRM. Due to the accumulation and variation of preference, in the recommendation model we conduct an adaptive method including long-term and short-term preference adjustment to enhance the result of recommendation. Research results show, based on the assessment of a relative index, that the recommendation precision rate could reach 85.48%.", "title": "" }, { "docid": "c74b967ecf7844843ee9389ba591b84e", "text": "We present an approach to human-robot interaction through gesture-free spoken dialogue. Our approach is based on passive knowledge rarefication through goal disambiguation, a technique that allows a human operator to collaborate with a mobile robot on various tasks through spoken dialogue without making bodily gestures. A key assumption underlying our approach is that the operator and the robot share a common set of goals. Another key idea is that language, vision, and action share common memory structures. We discuss how our approach achieves four types of human-robot interaction: command, goal disambiguation, introspection, and instruction-based learning. We describe the system we developed to implement our approach and present experimental results.", "title": "" }, { "docid": "7af9293fbe12f3e859ee579d0f8739a5", "text": "We present the findings from a Dutch field study of 30 outsourcing deals totaling more than 100 million Euro, where both customers and corresponding IT-outsourcing providers participated. The main objective of the study was to examine, from a number of well-known factors, whether they discriminate between IT-outsourcing success and failure in the early phase of service delivery and to determine their impact on the chance of a successful deal. We investigated controllable factors to increase the odds during sourcing and rigid factors as a warning sign before closing a deal. Based on 250 interviews we collected 28 thousand data points. From the data and the perceived failure or success of the closed deals we investigated the discriminative power of the determinants (ex post). We found three statistically significant controllable factors that discriminated in an early phase between failure and success. They are: working according to the transition plan, demand management and, to our surprise, communication within the supplier organisation (so not between client and supplier). 
These factors also turned out to be the only significant factors for a (logistic) model predicting the chance of a successful IT-outsourcing deal. Improving demand management and internal communication at the supplier increases the odds the most; sticking to the transition plan helps only modestly. Other controllable factors were not significant in our study. They are managing the business case, transfer of staff or assets, retention of expertise and communication within the client organisation. Of the rigid factors, the motive to outsource, cultural differences, and the type of work were insignificant. The motive of the supplier was significant: internal motivations like increasing profit margins or business volume decreased the chance of success while external motivations like increasing market share or becoming a player increased the success rate. From the data we inferred that the degree of experience with sourcing did not prove to be a convincing factor of success. Hiring sourcing consultants proved counterproductive: it lowered the chances of success.", "title": "" }, { "docid": "06abf54df209e736ada3a9a951b14300", "text": "In this paper we present arguments supported by research examples for a fundamental shift of emphasis in education and its relation to technology, in particular AI technology. The ITS paradigm no longer dominates the field of AI and Education. New educational and pedagogic paradigms are being proposed and investigated, stressing the importance of learning how to learn instead of merely learning domain facts and rules of application. New uses of technology accompany this shift. We present trends and issues in this area exemplified by research projects and characterise three pedagogical scenarios in order to situate different modelling options for AI & Education.", "title": "" }, { "docid": "0b22284d575fb5674f61529c367bb724", "text": "The scapula fulfils many roles to facilitate optimal function of the shoulder. Normal function of the shoulder joint requires a scapula that can be properly aligned in multiple planes of motion of the upper extremity. Scapular dyskinesis, meaning abnormal motion of the scapula during shoulder movement, is a clinical finding commonly encountered by shoulder surgeons. It is best considered an impairment of optimal shoulder function. As such, it may be the underlying cause or the accompanying result of many forms of shoulder pain and dysfunction. The present review looks at the causes and treatment options for this indicator of shoulder pathology and aims to provide an overview of the management of disorders of the scapula.", "title": "" }, { "docid": "25e0dfd4ad96bc80050a399f6355bfec", "text": "Advances in information technology and near ubiquity of the Internet have spawned novel modes of communication and unprecedented insights into human behavior via the digital footprint. Health behavior randomized controlled trials (RCTs), especially technology-based, can leverage these advances to improve the overall clinical trials management process and benefit from improvements at every stage, from recruitment and enrollment to engagement and retention. In this paper, we report the results for recruitment and retention of participants in the SMART study and introduce a new model for clinical trials management that is a result of interdisciplinary team science. 
The MARKIT model brings together best practices from information technology, marketing, and clinical research into a single framework to maximize efforts for recruitment, enrollment, engagement, and retention of participants into a RCT. These practices may have contributed to the study's on-time recruitment that was within budget, 86% retention at 24 months, and a minimum of 57% engagement with the intervention over the 2-year RCT. Use of technology in combination with marketing practices may enable investigators to reach a larger and more diverse community of participants to take part in technology-based clinical trials, help maximize limited resources, and lead to more cost-effective and efficient clinical trial management of study participants as modes of communication evolve among the target population of participants.", "title": "" }, { "docid": "9a9f54a0c7c561772d56e471cc1ab47d", "text": "Reliable and timely delivery of periodic V2V (vehicle-to-vehicle) broadcast messages is essential for realizing the benefits of connected vehicles. Existing MAC protocols for ad hoc networks fall short of meeting these requirements. In this paper, we present, CoReCast, the first collision embracing protocol for vehicular networks. CoReCast provides high reliability and low delay by leveraging two unique opportunities: no strict constraint on energy consumption, and availability of GPS clocks to achieve near-perfect time and frequency synchronization.\n Due to low coherence time, the channel changes rapidly in vehicular networks. CoReCast embraces packet collisions and takes advantage of the channel dynamics to decode collided packets. The design of CoReCast is based on a preamble detection scheme that estimates channels from multiple transmitters without any prior information about them. The proposed scheme reduces the space and time requirement exponentially than the existing schemes. The system is evaluated through experiments with USRP N210 and GPS devices placed in vehicles driven on roads in different environments as well as using trace-driven simulations. It provides 15x and 2x lower delay than 802.11p and OCP (Omniscient Clustering Protocol), respectively. Reliability of CoReCast is 8x and 2x better than 802.11p and OCP, respectively.", "title": "" }, { "docid": "2c27fc786dadb6c0d048fcf66b22ed59", "text": "Changes in DNA copy number contribute to cancer pathogenesis. We now show that high-density single nucleotide polymorphism (SNP) arrays can detect copy number alterations. By hybridizing genomic representations of breast and lung carcinoma cell line and lung tumor DNA to SNP arrays, and measuring locus-specific hybridization intensity, we detected both known and novel genomic amplifications and homozygous deletions in these cancer samples. Moreover, by combining genotyping with SNP quantitation, we could distinguish loss of heterozygosity events caused by hemizygous deletion from those that occur by copy-neutral events. The simultaneous measurement of DNA copy number changes and loss of heterozygosity events by SNP arrays should strengthen our ability to discover cancer-causing genes and to refine cancer diagnosis.", "title": "" }, { "docid": "ced697994e4e8f8c65b4a06dae42ddeb", "text": "Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. 
Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea was explored only under certain limitations such as restricting the input data to be a single object or multiple objects representing the same concept. In this work we develop a new class of deep generative model called generative matching networks which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks and the ideas from meta-learning. By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that generative matching networks can significantly improve predictive performance on the fly as more additional data is available to the model and also adapt the latent space which is beneficial in the context of feature extraction.", "title": "" }, { "docid": "c4171bd7b870d26e0b2520fc262e7c88", "text": "Each year, the treatment decisions for more than 230, 000 breast cancer patients in the U.S. hinge on whether the cancer has metastasized away from the breast. Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labor intensive and error-prone. We present a framework to automatically detect and localize tumors as small as 100×100 pixels in gigapixel microscopy images sized 100, 000×100, 000 pixels. Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumor detection task. At 8 false positives per image, we detect 92.4% of the tumors, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity. We achieve image-level AUC scores above 97% on both the Camelyon16 test set and an independent set of 110 slides. In addition, we discover that two slides in the Camelyon16 training set were erroneously labeled normal. Our approach could considerably reduce false negative rates in metastasis detection.", "title": "" }, { "docid": "d473619f76f81eced041df5bc012c246", "text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. 
Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.", "title": "" }, { "docid": "cff459bd217bdbecefeceb70e3be5065", "text": "In this article we present FLUX-CiM, a novel method for extracting components (e.g., author names, article titles, venues, page numbers) from bibliographic citations. Our method does not rely on patterns encoding specific delimiters used in a particular citation style.This feature yields a high degree of automation and flexibility, and allows FLUX-CiM to extract from citations in any given format. Differently from previous methods that are based on models learned from user-driven training, our method relies on a knowledge base automatically constructed from an existing set of sample metadata records from a given field (e.g., computer science, health sciences, social sciences, etc.). These records are usually available on the Web or other public data repositories. To demonstrate the effectiveness and applicability of our proposed method, we present a series of experiments in which we apply it to extract bibliographic data from citations in articles of different fields. Results of these experiments exhibit precision and recall levels above 94% for all fields, and perfect extraction for the large majority of citations tested. In addition, in a comparison against a stateof-the-art information-extraction method, ours produced superior results without the training phase required by that method. Finally, we present a strategy for using bibliographic data resulting from the extraction process with FLUX-CiM to automatically update and expand the knowledge base of a given domain. We show that this strategy can be used to achieve good extraction results even if only a very small initial sample of bibliographic records is available for building the knowledge base.", "title": "" }, { "docid": "40c90bf58aae856c7c72bac573069173", "text": "Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a “distilled” policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. 
Moreover, the proposed learning process is more robust to hyperparameter settings and more stable—attributes that are critical in deep reinforcement learning.", "title": "" }, { "docid": "d9c4e90d9538c99206cc80bea2c1f808", "text": "Practical aspects of a real time auto parking controller are considered. A parking algorithm which can guarantee to find a parking path with any initial positions is proposed. The algorithm is theoretically proved and successfully applied to the OSU-ACT in the DARPA Urban Challenge 2007.", "title": "" }, { "docid": "86c3aefe7ab3fa2178da219f57bedf81", "text": "We present a model constructed for a large consumer products company to assess their vulnerability to disruption risk and quantify its impact on customer service. Risk profiles for the locations and connections in the supply chain are developed using Monte Carlo simulation, and the flow of material and network interactions are modeled using discrete-event simulation. Capturing both the risk profiles and material flow with simulation allows for a clear view of the impact of disruptions on the system. We also model various strategies for coping with the risk in the system in order to maintain product availability to the customer. We discuss the dynamic nature of risk in the network and the importance of proactive planning to mitigate and recover from disruptions.", "title": "" }, { "docid": "de34cb3489e58366f4aff7f05ba558c9", "text": "Current initiatives in the field of Business Process Management (BPM) strive for the development of a BPM standard notation by pushing the Business Process Modeling Notation (BPMN). However, such a proposed standard notation needs to be carefully examined. Ontological analysis is an established theoretical approach to evaluating modelling techniques. This paper reports on the outcomes of an ontological analysis of BPMN and explores identified issues by reporting on interviews conducted with BPMN users in Australia. Complementing this analysis we consolidate our findings with previous ontological analyses of process modelling notations to deliver a comprehensive assessment of BPMN.", "title": "" } ]
scidocsrr
0271486339a3185615f54cda636d8fbc
Semi-Supervised Generation with Cluster-aware Generative Models
[ { "docid": "93f89a636828df50dfe48ffa3e868ea6", "text": "The reparameterization trick enables the optimization of large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack continuous reparameterizations due to the discontinuous nature of discrete states. In this work we introduce concrete random variables – continuous relaxations of discrete random variables. The concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-likelihood of latent stochastic nodes) on the corresponding discrete graph. We demonstrate their effectiveness on density estimation and structured prediction tasks using neural networks.", "title": "" }, { "docid": "ecd8f70442aa40cd2088f4324fe0d247", "text": "Black box variational inference allows researchers to easily prototype and evaluate an array of models. Recent advances allow such algorithms to scale to high dimensions. However, a central question remains: How to specify an expressive variational distribution that maintains efficient computation? To address this, we develop hierarchical variational models (HVMs). HVMs augment a variational approximation with a prior on its parameters, which allows it to capture complex structure for both discrete and continuous latent variables. The algorithm we develop is black box, can be used for any HVM, and has the same computational efficiency as the original approximation. We study HVMs on a variety of deep discrete latent variable models. HVMs generalize other expressive variational distributions and maintains higher fidelity to the posterior.", "title": "" }, { "docid": "5245cdc023c612de89f36d1573d208fe", "text": "Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.", "title": "" }, { "docid": "de018dc74dd255cf54d9c5597a1f9f73", "text": "Smoothness regularization is a popular method to decrease generalization error. We propose a novel regularization technique that rewards local distributional smoothness (LDS), a KLdistance based measure of the model’s robustness against perturbation. The LDS is defined in terms of the direction to which the model distribution is most sensitive in the input space. 
We call training with LDS regularization virtual adversarial training (VAT). VAT resembles adversarial training (Goodfellow et al., 2015), but distinguishes itself in that it determines the adversarial direction from the model distribution alone, and does not use label information. The technique is therefore applicable even to semi-supervised learning. When we applied our technique to the classification task on the permutation-invariant MNIST dataset, it not only eclipsed all models that do not depend on generative models and pre-training, but also performed well in comparison to the state-of-the-art method (Rasmus et al., 2015), which uses a highly advanced generative model.", "title": "" } ]
[ { "docid": "08473b813d0c9e3441d5293c8d1f1a12", "text": "We present the design, implementation, and informal evaluation of tactile interfaces for small touch screens used in mobile devices. We embedded a tactile apparatus in a Sony PDA touch screen and enhanced its basic GUI elements with tactile feedback. Instead of observing the response of interface controls, users can feel it with their fingers as they press the screen. In informal evaluations, tactile feedback was greeted with enthusiasm. We believe that tactile feedback will become the next step in touch screen interface design and a standard feature of future mobile devices.", "title": "" }, { "docid": "8df0689ffe5c730f7a6ef6da65bec57e", "text": "Image-based reconstruction of 3D shapes is inherently biased under the occurrence of interreflections, since the observed intensity at surface concavities consists of direct and global illumination components. This issue is commonly not considered in a Photometric Stereo (PS) framework. Under the usual assumption of only direct reflections, this corrupts the normal estimation process in concave regions and thus leads to inaccurate results. For this reason, global illumination effects need to be considered for the correct reconstruction of surfaces affected by interreflections. While there is ongoing research in the field of inverse lighting (i.e. separation of global and direct illumination components), the interreflection aspect remains oftentimes neglected in the field of 3D shape reconstruction. In this study, we present a computationally driven approach for iteratively solving that problem. Initially, we introduce a photometric stereo approach that roughly reconstructs a surface with at first unknown reflectance properties. Then, we show that the initial surface reconstruction result can be refined iteratively regarding non-distant light sources and, especially, interreflections. The benefit for the reconstruction accuracy is evaluated on real Lambertian surfaces using laser range scanner data as ground truth.", "title": "" }, { "docid": "90033efd960bf121e7041c9b3cd91cbd", "text": "In this paper, we propose a novel framework for integrating geometrical measurements of monocular visual simultaneous localization and mapping (SLAM) and depth prediction using a convolutional neural network (CNN). In our framework, SLAM-measured sparse features and CNN-predicted dense depth maps are fused to obtain a more accurate dense 3D reconstruction including scale. We continuously update an initial 3D mesh by integrating accurately tracked sparse features points. Compared to prior work on integrating SLAM and CNN estimates [26], there are two main differences: Using a 3D mesh representation allows as-rigid-as-possible update transformations. We further propose a system architecture suitable for mobile devices, where feature tracking and CNN-based depth prediction modules are separated, and only the former is run on the device. We evaluate the framework by comparing the 3D reconstruction result with 3D measurements obtained using an RGBD sensor, showing a reduction in the mean residual error of 38% compared to CNN-based depth map prediction alone.", "title": "" }, { "docid": "c385054322970c86d3f08b298aa811e2", "text": "Recently, a small number of papers have appeared in which the authors implement stochastic search algorithms, such as evolutionary computation, to generate game content, such as levels, rules and weapons. 
We propose a taxonomy of such approaches, centring on what sort of content is generated, how the content is represented, and how the quality of the content is evaluated. The relation between search-based and other types of procedural content generation is described, as are some of the main research challenges in this new field. The paper ends with some successful examples of this approach.", "title": "" }, { "docid": "9afd6e40fa049a27876dda7a714cc9db", "text": "PHP is a server-side scripting language that is widely used to develop website services. However, web-based PHP applications are distributed as source code, which leaves them vulnerable because the lines of source code can easily be copied, modified, or used in other applications. This research aims to implement an obfuscation technique for PHP extension code using the AES algorithm. The AES algorithm is recommended by NIST (National Institute of Standards and Technology) for protecting the US government's national information security systems. Through encryption-based obfuscation, programmers gain an option to protect PHP source code so that the copyright or intellectual property of the program can be protected.", "title": "" }, { "docid": "9fff08cf60bb5f6ec538080719aa8224", "text": "This research presents a runner BIB number recognition system, an image processing approach that addresses the problems of runner image management at running events and improves its efficiency. The system processes runner images to recognize the BIB number and the time at which each runner appears in the media, and collects this information for later use. The BIB number is located on a BIB tag attached to the runner's body. To recognize the BIB number, the system first detects the runner's position; this step emphasizes runner face detection in the images and then searches for the BIB number in the runner's body-thigh area. The system then recognizes the BIB number from the BIB tag appearing in the media. This processing achieves a precision of 0.80, a recall of 0.81, and an F-measure of 0.80. The results show that the runner BIB number recognition system operates with high efficiency and can be applied to runner online communities in real situations. The system reduces the problems of runner image processing and makes it more convenient for runners to find their images from running events. Moreover, the system can be applied commercially to increase benefits in the running business.", "title": "" }, { "docid": "ba5b5732dd7c48874e4f216903bba0b1", "text": "This article presents a review of the application of insole plantar pressure sensor systems to the recognition and analysis of hemiplegic gait in stroke patients. Based on the review, tailor-made 3D insoles for plantar pressure measurement were designed and fabricated, and their function was compared with that of conventional flat insoles. The tailor-made 3D contour of the insole can improve the contact between insole and foot and enable sampling of plantar pressure at high reproducibility.", "title": "" }, { "docid": "aa29b992a92f958b7ac8ff8e1cb8cd19", "text": "Physically unclonable functions (PUFs) provide a device-unique challenge-response mapping and are employed for authentication and encryption purposes. Unpredictability and reliability are the core requirements of PUFs: unpredictability implies that an adversary cannot sufficiently predict future responses from previous observations.
Reliability is important as it increases the reproducibility of PUF responses and hence allows validation of expected responses. However, advanced machine-learning algorithms have been shown to be a significant threat to the practical validity of PUFs, as they are able to accurately model PUF behavior. The most effective technique was shown to be the XOR-based combination of multiple PUFs, but as this approach drastically reduces reliability, it does not scale well against software-based machine-learning attacks. In this paper, we analyze threats to PUF security and propose PolyPUF, a scalable and secure architecture to introduce polymorphic PUF behavior. This architecture significantly increases model-building resistivity while maintaining reliability. An extensive experimental evaluation and comparison demonstrate that the PolyPUF architecture can secure various PUF configurations and is the only evaluated approach to withstand highly complex neural network machine-learning attacks. Furthermore, we show that PolyPUF consumes less energy and has less implementation overhead in comparison to lightweight reference architectures.", "title": "" }, { "docid": "b1d1571bbb260272e8679cc7a3f92cfe", "text": "This article overviews the enzymes produced by microorganisms, which have been extensively studied worldwide for their isolation, purification and characterization of their specific properties. Researchers have isolated specific microorganisms from extreme sources under extreme culture conditions, with the objective that such isolated microbes would possess the capability to bio-synthesize special enzymes. Various Bio-industries require enzymes possessing special characteristics for their applications in processing of substrates and raw materials. The microbial enzymes act as bio-catalysts to perform reactions in bio-processes in an economical and environmentally-friendly way as opposed to the use of chemical catalysts. The special characteristics of enzymes are exploited for their commercial interest and industrial applications, which include: thermotolerance, thermophilic nature, tolerance to a varied range of pH, stability of enzyme activity over a range of temperature and pH, and other harsh reaction conditions. Such enzymes have proven their utility in bio-industries such as food, leather, textiles, animal feed, and in bio-conversions and bio-remediations.", "title": "" }, { "docid": "df114396d546abfc9b6f1767e3bab8db", "text": "I briefly highlight the salient properties of modified-inertia formulations of MOND, contrasting them with those of modified-gravity formulations, which describe practically all theories propounded to date. Future data (e.g. the establishment of the Pioneer anomaly as a new physics phenomenon) may prefer one of these broad classes of theories over the other. I also outline some possible starting ideas for modified inertia. 1 Modified MOND inertia vs. modified MOND gravity MOND is a modification of non-relativistic dynamics involving an acceleration constant a 0. In the formal limit a 0 → 0 standard Newtonian dynamics is restored. In the deep MOND limit, a 0 → ∞, a 0 and G appear in the combination (Ga 0). Much of the NR phenomenology follows from this simple prescription, including the asymptotic flatness of rotation curves, the mass-velocity relations (baryonic Tully-fisher and Faber Jackson relations), mass discrepancies in LSB galaxies, etc.. There are many realizations (theories) that embody the above dictates, relativistic and non-relativistic. 
The possibly very significant fact that a 0 ∼ cH 0 ∼ c(Λ/3) 1/2 may hint at the origin of MOND, and is most probably telling us that a. MOND is an effective theory having to do with how the universe at large shapes local dynamics, and b. in a Lorentz universe (with H 0 = 0, Λ = 0) a 0 = 0 and standard dynamics holds. We can broadly classify modified theories into two classes (with the boundary not so sharply defined): In modified-gravity (MG) formulations the field equation of the gravitational field (potential, metric) is modified; the equations of motion of other degrees of freedom (DoF) in the field are not. In modified-inertia (MI) theories the opposite it true. More precisely, in theories derived from an action modifying inertia is tantamount to modifying the kinetic (free) actions of the non-gravitational degrees of freedom. Local, relativistic theories in which the kinetic", "title": "" }, { "docid": "fb173d15e079fcdf0cc222f558713f9c", "text": "Structured data summarization involves generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention based encoderdecoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available WEATHERGOV dataset show around 18 BLEU (∼ 30%) improvement over the current state-of-the-art.", "title": "" }, { "docid": "2332c8193181b5ad31e9424ca37b0f5a", "text": "The ability to grasp ordinary and potentially never-seen objects is an important feature in both domestic and industrial robotics. For a system to accomplish this, it must autonomously identify grasping locations by using information from various sensors, such as Microsoft Kinect 3D camera. Despite numerous progress, significant work still remains to be done in this field. To this effect, we propose a dictionary learning and sparse representation (DLSR) framework for representing RGBD images from 3D sensors in the context of determining such good grasping locations. In contrast to previously proposed approaches that relied on sophisticated regularization or very large datasets, the derived perception system has a fast training phase and can work with small datasets. It is also theoretically founded for dealing with masked-out entries, which are common with 3D sensors. We contribute by presenting a comparative study of several DLSR approach combinations for recognizing and detecting grasp candidates on the standard Cornell dataset. Importantly, experimental results show a performance improvement of 1.69% in detection and 3.16% in recognition over current state-of-the-art convolutional neural network (CNN). Even though nowadays most popular vision-based approach is CNN, this suggests that DLSR is also a viable alternative with interesting advantages that CNN has not.", "title": "" }, { "docid": "e8ecb3597e3019691f128cf6a50239d9", "text": "Unmanned Aerial Vehicle (UAV) platforms are nowadays a valuable source of data for inspection, surveillance, mapping and 3D modeling issues. As UAVs can be considered as a lowcost alternative to the classical manned aerial photogrammetry, new applications in the shortand close-range domain are introduced. 
Rotary or fixed wing UAVs, capable of performing the photogrammetric data acquisition with amateur or SLR digital cameras, can fly in manual, semiautomated and autonomous modes. Following a typical photogrammetric workflow, 3D results like Digital Surface or Terrain Models (DTM/DSM), contours, textured 3D models, vector information, etc. can be produced, even on large areas. The paper reports the state of the art of UAV for Geomatics applications, giving an overview of different UAV platforms, applications and case studies, showing also the latest developments of UAV image processing. New perspectives are also addressed.", "title": "" }, { "docid": "d9ef259a2a2997a8b447b7c711f7da32", "text": "Wireless Sensor Networks (WSNs) have attracted much attention in recent years. The potential applications of WSNs are immense. They are used for collecting, storing and sharing sensed data. WSNs have been used for various applications including habitat monitoring, agriculture, nuclear reactor control, security and tactical surveillance. The WSN system developed in this paper is for use in precision agriculture applications, where real time data of climatologically and other environmental properties are sensed and control decisions are taken based on it to modify them. The architecture of a WSN system comprises of a set of sensor nodes and a base station that communicate with each other and gather local information to make global decisions about the physical environment. The sensor network is based on the IEEE 802.15.4 standard and two topologies for this application.", "title": "" }, { "docid": "59b7afc5c2af7de75248c90fdf5c9cd3", "text": "Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.", "title": "" }, { "docid": "dd270ffa800d633a7a354180eb3d426c", "text": "I have taken an experimental approach to this question. Freely voluntary acts are pre ceded by a specific electrical change in the brain (the ‘readiness potential’, RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350–400 ms after RP starts, but 200 ms. before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. 
These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility. But the deeper question still remains: Are freely voluntary acts subject to macro deterministic laws or can they appear without such constraints, non-determined by natural laws and ‘truly free’? I shall present an experimentalist view about these fundamental philosophical opposites.", "title": "" }, { "docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94", "text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.", "title": "" }, { "docid": "5b763dbb9f06ff67e44b5d38920e92bf", "text": "With the growing popularity of the internet, everything is available at our doorstep and convenience. The rapid increase in e-commerce applications has resulted in the increased usage of the credit card for offline and online payments. Though there are various benefits of using credit cards such as convenience, instant cash, but when it comes to security credit card holders, banks, and the merchants are affected when the card is being stolen, lost or misused without the knowledge of the cardholder (Fraud activity). Streaming analytics is a time-based processing of data and it is used to enable near real-time decision making by inspecting, correlating and analyzing the data even as it is streaming into applications and database from myriad different sources. We are making use of streaming analytics to detect and prevent the credit card fraud. Rather than singling out specific transactions, our solution analyses the historical transaction data to model a system that can detect fraudulent patterns. 
This model is then used to analyze transactions in real time.", "title": "" }, { "docid": "f5128625b3687c971ba3bef98d7c2d2a", "text": "In three experiments, we investigated the influence of juror, victim, and case factors on mock jurors' decisions in several types of child sexual assault cases (incest, day care, stranger abduction, and teacher-perpetrated abuse). We also validated and tested the ability of several scales measuring empathy for child victims, children's believability, and opposition to adult/child sex, to mediate the effect of jurors' gender on case judgments. Supporting a theoretical model derived from research on the perceived credibility of adult rape victims, women compared to men were more empathic toward child victims, more opposed to adult/child sex, more pro-women, and more inclined to believe children generally. In turn, women (versus men) made more pro-victim judgments in hypothetical abuse cases; that is, attitudes and empathy generally mediated this juror gender effect that is pervasive in this literature. The experiments also revealed that strength of case evidence is a powerful factor in determining judgments, and that teen victims (14 years old) are blamed more for sexual abuse than are younger children (5 years old), but that perceptions of 5 and 10 year olds are largely similar. Our last experiment illustrated that our findings of mediation generalize to a community member sample.", "title": "" }, { "docid": "ff707f7c041a13ff3fcd1efd91c7103a", "text": "We conceptualize and propose a theoretical model of sellers' trust in buyers in the cross-border e-commerce context. This model is based on signalling theory, which is further refined using trust theories and empirical findings from prior e-commerce trust research.", "title": "" } ]
scidocsrr
619af58106996778a9284d09e402b378
The Ontology of Biological and Clinical Statistics (OBCS) for standardized and reproducible statistical analysis
[ { "docid": "d258a14fc9e64ba612f2c8ea77f85d08", "text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.", "title": "" } ]
[ { "docid": "1183b3ea7dd929de2c18af49bf549ceb", "text": "Robust and time-efficient skeletonization of a (planar) shape, which is connectivity preserving and based on Euclidean metrics, can be achieved by first regularizing the Voronoi diagram (VD) of a shape's boundary points, i.e., by removal of noise-sensitive parts of the tessellation and then by establishing a hierarchic organization of skeleton constituents. Each component of the VD is attributed with a measure of prominence which exhibits the expected invariance under geometric transformations and noise. The second processing step, a hierarchic clustering of skeleton branches, leads to a multiresolution representation of the skeleton, termed skeleton pyramid. Index terms — Distance transform, hierarchic skeletons, medial axis, regularization, shape description, thinning, Voronoi tessellation.", "title": "" }, { "docid": "bf57a5fcf6db7a9b26090bd9a4b65784", "text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.", "title": "" }, { "docid": "5ddbaa58635d706215ae3d61fe13e46c", "text": "Recent years have seen growing interest in the problem of super-resolution restoration of video sequences. Whereas in the traditional single image restoration problem only a single input image is available for processing, the task of reconstructing super-resolution images from multiple undersampled and degraded images can take advantage of the additional spatiotemporal data available in the image sequence. In particular, camera and scene motion lead to frames in the source video sequence containing similar, but not identical information. The additional information available in these frames make possible reconstruction of visually superior frames at higher resolution than that of the original data. In this paper we review the current state of the art and identify promising directions for future research. The authors are with the Laboratory for Image and Signal Analysis (LISA), University of Notre Dame, Notre Dame, IN 46556. E-mail: rls@nd.edu.", "title": "" }, { "docid": "553de71fcc3e4e6660015632eee751b1", "text": "Data governance is an emerging research area getting attention from information systems (IS) scholars and practitioners. In this paper I take a look at existing literature and current state-of-the-art in data governance. I found out that there is only a limited amount of existing scientific literature, but many practitioners are already treating data as a valuable corporate asset.
The paper describes an action design research project that will be conducted in 2012-2016 and is expected to result in a generic data governance framework.", "title": "" }, { "docid": "9d26e9a4b5694588c7957067d9586df5", "text": "Converters for telecom DC/DC power supply applications often require an output voltage somewhere within a wide range of input voltages. While the design of traditional converters will come with a heavy penalty in terms of component stresses and losses, and with the restrictions on the output voltage. Besides that, the high efficiency around the nominal input is another restriction for traditional converters. A controlling scheme for the four switch buck-boost converter is proposed to achieve high efficiency within the line range and the highest efficiency around the nominal input. A 48 V(36-75 V) input 12 V@25 A output two-stage prototype composed of the proposed converter and a full bridge converter is built in the lab. The experimental results verified the analysis.", "title": "" }, { "docid": "84301afe8fa5912dc386baab84dda7ea", "text": "There is a growing understanding that machine learning architectures have to be much bigger and more complex to approach any intelligent behavior. There is also a growing understanding that purely supervised learning is inadequate to train such systems. A recent paradigm of artificial recurrent neural network (RNN) training under the umbrella-name Reservoir Computing (RC) demonstrated that training big recurrent networks (the reservoirs) differently than supervised readouts from them is often better. It started with Echo State Networks (ESNs) and Liquid State Machines ten years ago where the reservoir was generated randomly and only linear readouts from it were trained. Rather surprisingly, such simply and fast trained ESNs outperformed classical fully-trained RNNs in many tasks. While full supervised training of RNNs is problematic, intuitively there should also be something better than a random network. In recent years RC became a vivid research field extending the initial paradigm from fixed random reservoir and trained output into using different methods for training the reservoir and the readout. In this thesis we overview existing and investigate new alternatives to the classical supervised training of RNNs and their hierarchies. First we present a taxonomy and a systematic overview of the RNN training approaches under the RC umbrella. Second, we propose and investigate the use of two different neural network models for the reservoirs together with several unsupervised adaptation techniques, as well as unsupervisedly layer-wise trained deep hierarchies of such models. We rigorously empirically test the proposed methods on two temporal pattern recognition datasets, comparing it to the classical reservoir computing state of art.", "title": "" }, { "docid": "81cd2034b2096db2be699821e499dfa8", "text": "At the US National Library of Medicine we have developed the Unified Medical Language System (UMLS), whose goal it is to provide integrated access to a large number of biomedical resources by unifying the vocabularies that are used to access those resources. The UMLS currently interrelates some 60 controlled vocabularies in the biomedical domain. The UMLS coverage is quite extensive, including not only many concepts in clinical medicine, but also a large number of concepts applicable to the broad domain of the life sciences. 
In order to provide an overarching conceptual framework for all UMLS concepts, we developed an upper-level ontology, called the UMLS semantic network. The semantic network, through its 134 semantic types, provides a consistent categorization of all concepts represented in the UMLS. The 54 links between the semantic types provide the structure for the network and represent important relationships in the biomedical domain. Because of the growing number of information resources that contain genetic information, the UMLS coverage in this area is being expanded. We recently integrated the taxonomy of organisms developed by the NLM's National Center for Biotechnology Information, and we are currently working together with the developers of the Gene Ontology to integrate this resource, as well. As additional, standard, ontologies become publicly available, we expect to integrate these into the UMLS construct.", "title": "" }, { "docid": "c3bf7e7556dba69d4e3ff40e6b40be17", "text": "A frequency-domain parametric study using generalized consistent transmitting boundaries has been performed to evaluate the significance of topographic effects on the seismic response of steep slopes. The results show that the peak amplification of motion at the crest of a slope occurs at a normalized frequency H/λ = 0.2, where H is the slope height and λ is the wavelength of the motion. The importance of the natural site frequency is illustrated by the analysis of a stepped layer over a half-space. It was found that the natural frequency of the region behind the crest can dominate the response, relative to the topographic effect, for the conditions studied. Moreover, the effect of topography can be handled separately from the amplification due to the natural frequency of the deposit behind the crest of the slope. This concept of separating the amplification caused by topography from that caused by the natural frequency is advantageous to the development of a simplified method to estimate topographic effects.", "title": "" }, { "docid": "1e100608fd78b1e20020f892784199ed", "text": "In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method [1]. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state of the art on a challenging real-world dataset.", "title": "" }, { "docid": "aec273859fedb6550c461548e9ab7c53", "text": "In this paper, we describe our contribution to the NTCIR-13 Short Text Conversation (STC) Chinese task. Short text conversation remains an important part of social media and has gathered much attention recently. The task aims to retrieve or generate a relevant comment given a post. We consider both closed- and open-domain STC for the retrieval-based and generation-based tracks. To be more specific, the former applies a retrieval-based approach from the given corpus, while the latter utilizes the Web to fulfill the generation-based track.
Evaluation results show that our retrieval–based approach performs better than the generation-based one.", "title": "" }, { "docid": "8e60cc41d66f234705383b86f7282499", "text": "A self-powered autonomous RFID device with sensing and computing capabilities is presented in this paper. Powered by an RF energy-harvesting circuit enhanced by a DC-DC voltage booster in silicon-on-insulator (SOI) technology, the device relies on a microcontroller and a new generation I2C-RFID chip to wirelessly deliver sensor data to standard RFID EPC Class-1 Generation-2 (Gen2) readers. When the RF power received from the interrogating reader is -14 dBm or higher, the device, fabricated on an FR4 substrate using low-cost discrete components, is able to produce 2.4-V DC voltage to power its circuitry. The experimental results demonstrate the effectiveness of the device to perform reliable sensor data transmissions up to 5 meters in fully-passive mode. To the best of our knowledge, this represents the longest read range ever reported for passive UHF RFID sensors compliant with the EPC Gen2 standard.", "title": "" }, { "docid": "e517370f733c10190da90c834f0f486a", "text": "The planning and organization of athletic training have historically been much discussed and debated in the coaching and sports science literature. Various influential periodization theorists have devised, promoted, and substantiated particular training-planning models based on interpretation of the scientific evidence and individual beliefs and experiences. Superficially, these proposed planning models appear to differ substantially. However, at a deeper level, it can be suggested that such models share a deep-rooted cultural heritage underpinned by a common set of historically pervasive planning beliefs and assumptions. A concern with certain of these formative assumptions is that, although no longer scientifically justifiable, their shaping influence remains deeply embedded. In recent years substantial evidence has emerged demonstrating that training responses vary extensively, depending upon multiple underlying factors. Such findings challenge the appropriateness of applying generic methodologies, founded in overly simplistic rule-based decision making, to the planning problems posed by inherently complex biological systems. The purpose of this review is not to suggest a whole-scale rejection of periodization theories but to promote a refined awareness of their various strengths and weaknesses. Eminent periodization theorists-and their variously proposed periodization models-have contributed substantially to the evolution of training-planning practice. However, there is a logical line of reasoning suggesting an urgent need for periodization theories to be realigned with contemporary elite practice and modern scientific conceptual models. In concluding, it is recommended that increased emphasis be placed on the design and implementation of sensitive and responsive training systems that facilitate the guided emergence of customized context-specific training-planning solutions.", "title": "" }, { "docid": "cd977d0e24fd9e26e90f2cf449141842", "text": "Several leadership and ethics scholars suggest that the transformational leadership process is predicated on a divergent set of ethical values compared to transactional leadership. Theoretical accounts declare that deontological ethics should be associated with transformational leadership while transactional leadership is likely related to teleological ethics. 
However, very little empirical research supports these claims. Furthermore, despite calls for increasing attention as to how leaders influence their followers’ perceptions of the importance of ethics and corporate social responsibility (CSR) for organizational effectiveness, no empirical study to date has assessed the comparative impact of transformational and transactional leadership styles on follower CSR attitudes. Data from 122 organizational leaders and 458 of their followers indicated that leader deontological ethical values (altruism, universal rights, Kantian principles, etc.) were strongly associated with follower ratings of transformational leadership, while leader teleological ethical values (utilitarianism) were related to follower ratings of transactional leadership. As predicted, only transformational leadership was associated with follower beliefs in the stakeholder view of CSR. Implications for the study and practice of ethical leadership, future research directions, and management education are discussed.", "title": "" }, { "docid": "9a4e9c73465d1026c2f5c91ec17eaf74", "text": "Devising an expressive question taxonomy is a central problem in question generation. Through examination of a corpus of human-human taskoriented tutoring, we have found that existing question taxonomies do not capture all of the tutorial questions present in this form of tutoring. We propose a hierarchical question classification scheme for tutorial questions in which the top level corresponds to the tutor’s goal and the second level corresponds to the question type. The application of this hierarchical classification scheme to a corpus of keyboard-to-keyboard tutoring of introductory computer science yielded high inter-rater reliability, suggesting that such a scheme is appropriate for classifying tutor questions in design-oriented tutoring. We discuss numerous open issues that are highlighted by the current analysis.", "title": "" }, { "docid": "161dad1b4abe8e4657bf3e3c5e8cb68c", "text": "Gratitude is an important aspect of human sociality, and is valued by religions and moral philosophies. It has been established that gratitude leads to benefits for both mental health and interpersonal relationships. It is thus important to elucidate the neurobiological correlates of gratitude, which are only now beginning to be investigated. To this end, we conducted an experiment during which we induced gratitude in participants while they underwent functional magnetic resonance imaging. We hypothesized that gratitude ratings would correlate with activity in brain regions associated with moral cognition, value judgment and theory of mind. The stimuli used to elicit gratitude were drawn from stories of survivors of the Holocaust, as many survivors report being sheltered by strangers or receiving lifesaving food and clothing, and having strong feelings of gratitude for such gifts. The participants were asked to place themselves in the context of the Holocaust and imagine what their own experience would feel like if they received such gifts. For each gift, they rated how grateful they felt. The results revealed that ratings of gratitude correlated with brain activity in the anterior cingulate cortex and medial prefrontal cortex, in support of our hypotheses. 
The results provide a window into the brain circuitry for moral cognition and positive emotion that accompanies the experience of benefitting from the goodwill of others.", "title": "" }, { "docid": "11b11bf5be63452e28a30b4494c9a704", "text": "Advertisement and brand awareness play an important role in brand building, brand recognition and brand loyalty, and boost sales performance, which is regarded as the foundation for brand development. To some degree advertisement and brand awareness can directly influence consumers' buying behavior. Female consumers from the IT industry were taken as the main consumers for this research. The researcher investigates brand intention factors and consumers' individual factors in how advertising and brand awareness affect fast moving consumer goods, especially personal care products. The aims of the paper are to examine advertising and the impact of brand awareness on FMCG products, to analyze the influence of advertising on personal care products among female consumers in the IT industry, and finally to study the impact of media on advertising and brand awareness. The survey was conducted in the form of a questionnaire, which was found valid and reliable for this research; after evaluating some questions, an improved questionnaire was developed. The questionnaires were then distributed among 200 female consumers, with a response rate of 100%. We found that advertising consistently has a significant positive effect on brand awareness and that consumers perceive brand awareness with a positive attitude. The findings show that advertising and brand awareness have a strong positive influence on, and a considerable relationship with, consumers' purchase intention. This research highlights that female consumers of personal care products in the IT industry are more brand conscious and aware of their personal care products; advertisement and brand awareness affect their purchase intention positively, and advertising media positively influence brand awareness and purchase intention. The obtained data were then processed by Pearson correlation, multiple regression analysis and ANOVA.", "title": "" }, { "docid": "394fe4987b2d452a32168f243d69488a", "text": "Understanding when and how much shoulder muscles are active during upper extremity sports is helpful to physicians, therapists, trainers and coaches in providing appropriate treatment, training and rehabilitation protocols to these athletes. This review focuses on shoulder muscle activity (rotator cuff, deltoids, pectoralis major, latissimus dorsi, triceps and biceps brachii, and scapular muscles) during the baseball pitch, the American football throw, the windmill softball pitch, the volleyball serve and spike, the tennis serve and volley, baseball hitting, and the golf swing. Because shoulder electromyography (EMG) data are far more extensive for overhead throwing activities compared with non-throwing upper extremity sports, much of this review focuses on shoulder EMG during the overhead throwing motion.
Throughout this review shoulder kinematic and kinetic data (when available) are integrated with shoulder EMG data to help better understand why certain muscles are active during different phases of an activity, what type of muscle action (eccentric or concentric) occurs, and to provide insight into the shoulder injury mechanism. Kinematic, kinetic and EMG data have been reported extensively during overhead throwing, such as baseball pitching and football passing. Because shoulder forces, torques and muscle activity are generally greatest during the arm cocking and arm deceleration phases of overhead throwing, it is believed that most shoulder injuries occur during these phases. During overhead throwing, high rotator cuff muscle activity is generated to help resist the high shoulder distractive forces approximately 80-120% bodyweight during the arm cocking and deceleration phases. During arm cocking, peak rotator cuff activity is 49-99% of a maximum voluntary isometric contraction (MVIC) in baseball pitching and 41-67% MVIC in football throwing. During arm deceleration, peak rotator cuff activity is 37-84% MVIC in baseball pitching and 86-95% MVIC in football throwing. Peak rotator cuff activity is also high is the windmill softball pitch (75-93% MVIC), the volleyball serve and spike (54-71% MVIC), the tennis serve and volley (40-113% MVIC), baseball hitting (28-39% MVIC), and the golf swing (28-68% MVIC). Peak scapular muscle activity is also high during the arm cocking and arm deceleration phases of baseball pitching, with peak serratus anterior activity 69-106% MVIC, peak upper, middle and lower trapezius activity 51-78% MVIC, peak rhomboids activity 41-45% MVIC, and peak levator scapulae activity 33-72% MVIC. Moreover, peak serratus anterior activity was approximately 60% MVIC during the windmill softball pitch, approximately 75% MVIC during the tennis serve and forehand and backhand volley, approximately 30-40% MVIC during baseball hitting, and approximately 70% MVIC during the golf swing. In addition, during the golf swing, peak upper, middle and lower trapezius activity was 42-52% MVIC, peak rhomboids activity was approximately 60% MVIC, and peak levator scapulae activity was approximately 60% MVIC.", "title": "" }, { "docid": "103f432e237567c2954490e8ef257fe7", "text": "Pierre Bourdieu holds the Chair in Sociology at the prestigious College de France, Paris. He is Directeur d'Etudes at l'Ecole des Hautes Etudes en Sciences Sociales, where he is also Director of the Center for European Sociology, and Editor of the influential journal Actes de la recherche en sciences sociales. Professor Bourdieu is the author or coauthor of approximately twenty books. A number of these have been published in English translation: The Algerians, 1962; Reproduction in Education, Society and Culture (with Jean-Claude Passeron), 1977; Outline of a Theory of Practice, 1977; Algeria I960, 1979; The Inheritors: French Students and their Relations to Culture, 1979; Distinction: A Social Critique of the Judgment of Taste, 1984. The essay below analyzes what Bourdieu terms the \"juridical field.\" In Bourdieu's conception, a \"field\" is an area of structured, socially patterned activity or \"practice,\" in this case disciplinarily and professionally defined. 
The \"field\" and its \"practices\" have special senses in", "title": "" }, { "docid": "bf16ccf68804d05201ad7a6f0a2920fe", "text": "The purpose of this paper is to review and discuss public performance management in general and performance appraisal and pay for performance specifically. Performance is a topic that is a popular catch-cry and performance management has become a new organizational ideology. Under the global economic crisis, almost every public and private organization is struggling with a performance challenge, one way or another. Various aspects of performance management have been extensively discussed in the literature. Many researchers and experts assert that sets of guidelines for design of performance management systems would lead to high performance (Kaplan and Norton, 1996, 2006). A long time ago, the traditional performance measurement was developed from cost and management accounting and such purely financial perspective of performance measures was perceived to be inappropriate so that multi-dimensional performance management was development in the 1970s (Radnor and McGuire, 2004).", "title": "" } ]
scidocsrr
b9e98124971c2fd8d827fdfa00b51993
Do Less and Achieve More: Training CNNs for Action Recognition Utilizing Action Images from the Web
[ { "docid": "c439a5c8405d8ba7f831a5ac4b1576a7", "text": "1. Cao, L., Liu, Z., Huang, T.S.: Cross-dataset action detection. In: CVPR (2010). 2. Yang, Y., Ramanan, D.: Articulated pose estimation with flexible mixtures-of-parts. In: CVPR (2011) 3. Lan, T., etc.: Discriminative figure-centric models for joint action localization and recognition. In: ICCV (2011). 4. Tian, Y., Sukthankar, R., Shah, M.: Spatiotemporal deformable part models for action detection. In: CVPR (2013). 5. Wang, H., Schmid, C.: Action recognition with improved trajectories. In: ICCV (2013). Experiments", "title": "" } ]
[ { "docid": "790d30535edadb8e6318b6907b8553f3", "text": "Learning to anticipate future events on the basis of past experience with the consequences of one's own behavior (operant conditioning) is a simple form of learning that humans share with most other animals, including invertebrates. Three model organisms have recently made significant contributions towards a mechanistic model of operant conditioning, because of their special technical advantages. Research using the fruit fly Drosophila melanogaster implicated the ignorant gene in operant conditioning in the heat-box, research on the sea slug Aplysia californica contributed a cellular mechanism of behavior selection at a convergence point of operant behavior and reward, and research on the pond snail Lymnaea stagnalis elucidated the role of a behavior-initiating neuron in operant conditioning. These insights demonstrate the usefulness of a variety of invertebrate model systems to complement and stimulate research in vertebrates.", "title": "" }, { "docid": "581ec70f1a056cb344825e66ad203c69", "text": "A new approach to achieve coalescence and sintering of metallic nanoparticles at room temperature is presented. It was discovered that silver nanoparticles behave as soft particles when they come into contact with oppositely charged polyelectrolytes and undergo a spontaneous coalescence process, even without heating. Utilizing this finding in printing conductive patterns, which are composed of silver nanoparticles, enables achieving high conductivities even at room temperature. Due to the sintering of nanoparticles at room temperature, the formation of conductive patterns on plastic substrates and even on paper is made possible. The resulting high conductivity, 20% of that for bulk silver, enabled fabrication of various devices as demonstrated by inkjet printing of a plastic electroluminescent device.", "title": "" }, { "docid": "ab148ea69cf884b2653823b350ed5cfc", "text": "The application of information retrieval techniques to search tasks in software engineering is made difficult by the lexical gap between search queries, usually expressed in natural language (e.g. English), and retrieved documents, usually expressed in code (e.g. programming languages). This is often the case in bug and feature location, community question answering, or more generally the communication between technical personnel and non-technical stake holders in a software project. In this paper, we propose bridging the lexical gap by projecting natural language statements and code snippets as meaning vectors in a shared representation space. In the proposed architecture, word embeddings are first trained on API documents, tutorials, and reference documents, and then aggregated in order to estimate semantic similarities between documents. Empirical evaluations show that the learned vector space embeddings lead to improvements in a previously explored bug localization task and a newly defined task of linking API documents to computer programming questions.", "title": "" }, { "docid": "287572e1c394ec6959853f62b7707233", "text": "This paper presents a method for state estimation on a ballbot; i.e., a robot balancing on a single sphere. Within the framework of an extended Kalman filter and by utilizing a complete kinematic model of the robot, sensory information from different sources is combined and fused to obtain accurate estimates of the robot's attitude, velocity, and position. 
This information is to be used for state feedback control of the dynamically unstable system. Three incremental encoders (attached to the omniwheels that drive the ball of the robot) as well as three rate gyroscopes and accelerometers (attached to the robot's main body) are used as sensors. For the presented method, observability is proven analytically for all essential states in the system, and the algorithm is experimentally evaluated on the Ballbot Rezero.", "title": "" }, { "docid": "ff81d8b7bdc5abbd9ada376881722c02", "text": "Along with the progress of miniaturization and energy saving technologies of sensors, biological information in our daily life can be monitored by installing the sensors to a lavatory bowl. Lavatory is usually shared among several people, therefore biological information need to be identified. Using camera, microphone, or scales is not appropriate considering privacy in a lavatory. In this paper, we focus on the difference in the way of pulling a toilet paper roll and propose a system that identifies individuals based on features of rotation of a toilet paper roll with a gyroscope. The evaluation results confirmed that 85.8% accuracy was achieved for a five-people group in a laboratory environment.", "title": "" }, { "docid": "3d8a102c53c6e594e01afc7ad685c7ab", "text": "As register allocation is one of the most important phases in optimizing compilers, much work has been done to improve its quality and speed. We present a novel register allocation architecture for programs in SSA-form which simplifies register allocation significantly. We investigate certain properties of SSA-programs and their interference graphs, showing that they belong to the class of chordal graphs. This leads to a quadratic-time optimal coloring algorithm and allows for decoupling the tasks of coloring, spilling and coalescing completely. After presenting heuristic methods for spilling and coalescing, we compare our coalescing heuristic to an optimal method based on integer linear programming.", "title": "" }, { "docid": "e0223a5563e107308c88a43df5b1c8ba", "text": "One question central to Reinforcement Learning is how to learn a feature representation that supports algorithm scaling and re-use of learned information from different tasks. Successor Features approach this problem by learning a feature representation that satisfies a temporal constraint. We present an implementation of an approach that decouples the feature representation from the reward function, making it suitable for transferring knowledge between domains. We then assess the advantages and limitations of using Successor Features for transfer.", "title": "" }, { "docid": "2f8a07428a5ba3b51f4c990d0de18370", "text": "Pain is a common and distressing symptom in critically ill patients. Uncontrolled pain places patients at risk for numerous adverse psychological and physiological consequences, some of which may be life-threatening. A systematic assessment of pain is difficult in intensive care units because of the high percentage of patients who are noncommunicative and unable to self-report pain. Several tools have been developed to identify objective measures of pain, but the best tool has yet to be identified. 
A comprehensive search on the reliability and validity of observational pain scales indicated that although the Critical-Care Pain Observation Tool was superior to other tools in reliably detecting pain, pain assessment in individuals incapable of spontaneous neuromuscular movements or in patients with concurrent conditions, such as chronic pain or delirium, remains an enigma.", "title": "" }, { "docid": "69566105ef6c731e410e21e8ad6d5749", "text": "Despite advances in fingerprint matching, partial/incomplete/fragmentary fingerprint recognition remains a challenging task. While miniaturization of fingerprint scanners limits the capture of only part of the fingerprint, there is also special interest in processing latent fingerprints which are likely to be partial and of low quality. Partial fingerprints do not include all the structures available in a full fingerprint, hence a suitable matching technique which is independent of specific fingerprint features is required. Common fingerprint recognition methods are based on fingerprint minutiae which do not perform well when applied to low quality images and might not even be suitable for partial fingerprint recognition. To overcome this drawback, in this research, a region-based fingerprint recognition method is proposed in which the fingerprints are compared in a pixel- wise manner by computing their correlation coefficient. Therefore, all the attributes of the fingerprint contribute in the matching decision. Such a technique is promising to accurately recognise a partial fingerprint as well as a full fingerprint compared to the minutiae-based fingerprint recognition methods.The proposed method is based on simple but effective metrics that has been defined to compute local similarities which is then combined into a global score such that it is less affected by distribution skew of the local similarities. Extensive experiments over Fingerprint Verification Competition (FVC) data set proves the superiority of the proposed method compared to other techniques in literature.", "title": "" }, { "docid": "1196ab65ddfcedb8775835f2e176576f", "text": "Faster R-CNN achieves state-of-the-art performance on generic object detection. However, a simple application of this method to a large vehicle dataset performs unimpressively. In this paper, we take a closer look at this approach as it applies to vehicle detection. We conduct a wide range of experiments and provide a comprehensive analysis of the underlying structure of this model. We show that through suitable parameter tuning and algorithmic modification, we can significantly improve the performance of Faster R-CNN on vehicle detection and achieve competitive results on the KITTI vehicle dataset. We believe our studies are instructive for other researchers investigating the application of Faster R-CNN to their problems and datasets.", "title": "" }, { "docid": "dfa611e19a3827c66ea863041a3ef1e2", "text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. 
We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.", "title": "" }, { "docid": "cd587b4f35290bf779b0c7ee0214ab72", "text": "Time series data is perhaps the most frequently encountered type of data examined by the data mining community. Clustering is perhaps the most frequently used data mining algorithm, being useful in it's own right as an exploratory technique, and also as a subroutine in more complex data mining algorithms such as rule discovery, indexing, summarization, anomaly detection, and classification. Given these two facts, it is hardly surprising that time series clustering has attracted much attention. The data to be clustered can be in one of two formats: many individual time series, or a single time series, from which individual time series are extracted with a sliding window. Given the recent explosion of interest in streaming data and online algorithms, the latter case has received much attention.In this work we make a surprising claim. Clustering of streaming time series is completely meaningless. More concretely, clusters extracted from streaming time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature.We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method which, based on the concept of time series motifs, is able to meaningfully cluster some streaming time series datasets.", "title": "" }, { "docid": "c2fb2e46eea33dcf9ec1872de5d57272", "text": "Computational Drug Discovery, which uses computational techniques to facilitate and improve the drug discovery process, has aroused considerable interests in recent years. Drug Repositioning (DR) and DrugDrug Interaction (DDI) prediction are two key problems in drug discovery and many computational techniques have been proposed for them in the last decade. Although these two problems have mostly been researched separately in the past, both DR and DDI can be formulated as the problem of detecting positive interactions between data entities (DR is between drug and disease, and DDI is between pairwise drugs). The challenge in both problems is that we can only observe a very small portion of positive interactions. 
In this paper, we propose a novel framework called Dyadic PositiveUnlabeled learning (DyPU) to solve the problem of detecting positive interactions. DyPU forces positive data pairs to rank higher than the average score of unlabeled data pairs. Moreover, we also derive the dual formulation of the proposed method with the rectifier scoring function and we show that the associated non-trivial proximal operator admits a closed form solution. Extensive experiments are conducted on real drug data sets and the results show that our method achieves superior performance comparing with the state-of-the-art.", "title": "" }, { "docid": "64baa8b11855ad6333ae67f18c6b56b0", "text": "The covariance matrix adaptation evolution strategy (CMA-ES) rates among the most successful evolutionary algorithms for continuous parameter optimization. Nevertheless, it is plagued with some drawbacks like the complexity of the adaptation process and the reliance on a number of sophisticatedly constructed strategy parameter formulae for which no or little theoretical substantiation is available. Furthermore, the CMA-ES does not work well for large population sizes. In this paper, we propose an alternative – simpler – adaptation step of the covariance matrix which is closer to the ”traditional” mutative self-adaptation. We compare the newly proposed algorithm, which we term the CMSA-ES, with the CMA-ES on a number of different test functions and are able to demonstrate its superiority in particular for large population sizes.", "title": "" }, { "docid": "a6fd8b8506a933a7cc0530c6ccda03a8", "text": "Native ecosystems are continuously being transformed mostly into agricultural lands. Simultaneously, a large proportion of fields are abandoned after some years of use. Without any intervention, altered landscapes usually show a slow reversion to native ecosystems, or to novel ecosystems. One of the main barriers to vegetation regeneration is poor propagule supply. Many restoration programs have already implemented the use of artificial perches in order to increase seed availability in open areas where bird dispersal is limited by the lack of trees. To evaluate the effectiveness of this practice, we performed a series of meta-analyses comparing the use of artificial perches versus control sites without perches. We found that setting-up artificial perches increases the abundance and richness of seeds that arrive in altered areas surrounding native ecosystems. Moreover, density of seedlings is also higher in open areas with artificial perches than in control sites without perches. Taken together, our results support the use of artificial perches to overcome the problem of poor seed availability in degraded fields, promoting and/or accelerating the restoration of vegetation in concordance with the surrounding landscape.", "title": "" }, { "docid": "b6376259827dfc04f7c7c037631443f3", "text": "In this brief, a low-power flip-flop (FF) design featuring an explicit type pulse-triggered structure and a modified true single phase clock latch based on a signal feed-through scheme is presented. The proposed design successfully solves the long discharging path problem in conventional explicit type pulse-triggered FF (P-FF) designs and achieves better speed and power performance. Based on post-layout simulation results using TSMC CMOS 90-nm technology, the proposed design outperforms the conventional P-FF design data-close-to-output (ep-DCO) by 8.2% in data-to-Q delay. 
In the mean time, the performance edges on power and power- delay-product metrics are 22.7% and 29.7%, respectively.", "title": "" }, { "docid": "5a7568e877d5e1c2f2c50f98e95c5471", "text": "This paper presents an efficient method for finding matches to a given regular expression in given text using FPGAs. To match a regular expression of length n, a serial machine requires 0(2^n) memory and takes 0(1) time per text character. The proposed approach reqiures only 0(n^2) space and still process a text character in 0(1) time (one clock cycle).The improvement is due to the Nondetermineistic Finite Automaton (NFA) used to perform the matching. As far as the authors are aware, this is the first prctical use of a nondeterministic state machine on programmable logic. Furthermore, the paper presents a simple, fast algorithm that quickly constructs the NFA for the given regular expression. Fast NFA construction is crucial because the NFA structure depends on the regular expression, which is known only at runtime. Implementations of the algorithm for conventional FPGAs and the self-reconfigurable Gate Array (SRGA) are described. To evaluate performance, the NFA logic was mapped onto the Virtex XCV100 FPGA and the SRGA. Also, the performance of GNU grep for matching regular expressions was evaluated on an 800 MHz Pentium III machine. The proposed approach was faster than best case grep performance in most cases. It was orders of magnitude faster than worst case grep performance. Logic for the largest NFA considered fit in less than a 1000 CLBs while DFA storage for grep in the worst case consumed a few hundred megabytes.", "title": "" }, { "docid": "109644763e3a5ee5f59ec8e83719cc8d", "text": "The field of Natural Language Processing (NLP) is growing rapidly, with new research published daily along with an abundance of tutorials, codebases and other online resources. In order to learn this dynamic field or stay up-to-date on the latest research, students as well as educators and researchers must constantly sift through multiple sources to find valuable, relevant information. To address this situation, we introduce TutorialBank, a new, publicly available dataset which aims to facilitate NLP education and research. We have manually collected and categorized over 6,300 resources on NLP as well as the related fields of Artificial Intelligence (AI), Machine Learning (ML) and Information Retrieval (IR). Our dataset is notably the largest manually-picked corpus of resources intended for NLP education which does not include only academic papers. Additionally, we have created both a search engine 1 and a command-line tool for the resources and have annotated the corpus to include lists of research topics, relevant resources for each topic, prerequisite relations among topics, relevant subparts of individual resources, among other annotations. We are releasing the dataset and present several avenues for further research.", "title": "" }, { "docid": "353bfff6127e57660a918d4120ccf3d3", "text": "Deep learning techniques have demonstrated significant capacity in modeling some of the most challenging real world problems of high complexity. Despite the popularity of deep models, we still strive to better understand the underlying mechanism that drives their success. 
Motivated by observations that neurons in trained deep nets predict variation explaining factors indirectly related to the training tasks, we recognize that a deep network learns representations more general than the task at hand in order to disentangle impacts of multiple confounding factors governing the data, isolate the effects of the concerning factors, and optimize the given objective. Consequently, we propose to augment training of deep models with auxiliary information on explanatory factors of the data, in an effort to boost this disentanglement. Such deep networks, trained to comprehend data interactions and distributions more accurately, possess improved generalizability and compute better feature representations. Since pose is one of the most dominant confounding factors for object recognition, we adopt this principle to train a pose-aware deep convolutional neural network to learn both the class and pose of an object, so that it can make more informed classification decisions taking into account image variations induced by the object pose. We demonstrate that auxiliary pose information improves the classification accuracy in our experiments on Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) tasks. This general principle is readily applicable to improve the recognition and classification performance in various deep-learning applications.", "title": "" } ]
scidocsrr
34f1c513c4eaa53c1b3e8a5cf849f62a
Crowdsourcing in the cultural heritage domain: opportunities and challenges
[ { "docid": "393d3f3061940f98e5f3e4ed919f7f6d", "text": "Through online games, people can collectively solve large-scale computational problems. E ach year, people around the world spend billions of hours playing computer games. What if all this time and energy could be channeled into useful work? What if people playing computer games could, without consciously doing so, simultaneously solve large-scale problems? Despite colossal advances over the past 50 years, computers still don't possess the basic conceptual intelligence or perceptual capabilities that most humans take for granted. If we treat human brains as processors in a distributed system, each can perform a small part of a massive computation. Such a \" human computation \" paradigm has enormous potential to address problems that computers can't yet tackle on their own and eventually teach computers many of these human talents. Unlike computer processors, humans require some incentive to become part of a collective computation. Online games are a seductive method for encouraging people to participate in the process. Such games constitute a general mechanism for using brain power to solve open problems. In fact, designing such a game is much like designing an algorithm—it must be proven correct, its efficiency can be analyzed, a more efficient version can supersede a less efficient one, and so on. Instead of using a silicon processor, these \" algorithms \" run on a processor consisting of ordinary humans interacting with computers over the Internet. \" Games with a purpose \" have a vast range of applications in areas as diverse as security, computer vision, Internet accessibility, adult content filtering , and Internet search. Two such games under development at Carnegie Mellon University, the ESP Game and Peekaboom, demonstrate how humans , as they play, can solve problems that computers can't yet solve. Several important online applications such as search engines and accessibility programs for the visually impaired require accurate image descriptions. However, there are no guidelines about providing appropriate textual descriptions for the millions of images on the Web, and computer vision can't yet accurately determine their content. Current techniques used to categorize images for these applications are inadequate, largely because they assume that image content on a Web page is related to adjacent text. Unfortunately, the text near an image is often scarce or misleading and can be hard to process. Manual labeling is traditionally the only method for obtaining precise image descriptions, but this tedious and labor-intensive process is extremely costly. The ESP Game …", "title": "" } ]
[ { "docid": "262d91525f42ead887c8f8d50a5782fd", "text": "Over the past decade, machine learning techniques especially predictive modeling and pattern recognition in biomedical sciences from drug delivery system [7] to medical imaging has become one of the important methods which are assisting researchers to have deeper understanding of entire issue and to solve complex medical problems. Deep learning is power learning machine learning algorithm in classification while extracting high-level features. In this paper, we used convolutional neural network to classify Alzheimer’s brain from normal healthy brain. The importance of classifying this kind of medical data is to potentially develop a predict model or system in order to recognize the type disease from normal subjects or to estimate the stage of the disease. Classification of clinical data such as Alzheimer’s disease has been always challenging and most problematic part has been always selecting the most discriminative features. Using Convolutional Neural Network (CNN) and the famous architecture LeNet-5, we successfully classified functional MRI data of Alzheimer’s subjects from normal controls where the accuracy of test data on trained data reached 96.85%. This experiment suggests us the shift and scale invariant features extracted by CNN followed by deep learning classification is most powerful method to distinguish clinical data from healthy data in fMRI. This approach also enables us to expand our methodology to predict more complicated systems.", "title": "" }, { "docid": "90d06c97cdf3b67a81345f284d839c25", "text": "Open information extraction is an important task in Biomedical domain. The goal of the OpenIE is to automatically extract structured information from unstructured text with no or little supervision. It aims to extract all the relation tuples from the corpus without requiring pre-specified relation types. The existing tools may extract ill-structured or incomplete information, or fail on the Biomedical literature due to the long and complicated sentences. In this paper, we propose a novel pattern-based information extraction method for the wide-window entities (WW-PIE). WW-PIE utilizes dependency parsing to break down the long sentences first and then utilizes frequent textual patterns to extract the high-quality information. The pattern hierarchical grouping organize and structure the extractions to be straightforward and precise. Consequently, comparing with the existing OpenIE tools, WW-PIE produces structured output that can be directly used for downstream applications. The proposed WW-PIE is also capable in extracting n-ary and nested relation structures, which is less studied in the existing methods. Extensive experiments on real-world biomedical corpus from PubMed abstracts demonstrate the power of WW-PIE at extracting precise and well-structured information.", "title": "" }, { "docid": "f2a9d15d9b38738d563f9d9f9fa5d245", "text": "Mobile cellular networks have become both the generators and carriers of massive data. Big data analytics can improve the performance of mobile cellular networks and maximize the revenue of operators. In this paper, we introduce a unified data model based on the random matrix theory and machine learning. Then, we present an architectural framework for applying the big data analytics in the mobile cellular networks. 
Moreover, we describe several illustrative examples, including big signaling data, big traffic data, big location data, big radio waveforms data, and big heterogeneous data, in mobile cellular networks. Finally, we discuss a number of open research challenges of the big data analytics in the mobile cellular networks.", "title": "" }, { "docid": "252f5488232f7437ff886b79e2e7014e", "text": "Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame. Our approach consists of three parts: automatic exposure control during capture, HDR stitching across neighboring frames, and tonemapping for viewing. HDR stitching requires accurately registering neighboring frames and choosing appropriate pixels for computing the radiance map. We show examples for a variety of dynamic scenes. We also show how we can compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.", "title": "" }, { "docid": "467637b1f55d4673d0ddd5322a130979", "text": "In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator (<inline-formula> <tex-math notation=\"LaTeX\">$H^{*}H$ </tex-math></inline-formula>, where <inline-formula> <tex-math notation=\"LaTeX\">$H^{*}$ </tex-math></inline-formula> is the adjoint of the forward imaging operator, <inline-formula> <tex-math notation=\"LaTeX\">$H$ </tex-math></inline-formula>) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a <inline-formula> <tex-math notation=\"LaTeX\">$512\\times 512$ </tex-math></inline-formula> image on the GPU.", "title": "" }, { "docid": "47ea90e34fc95a941bc127ad8ccd2ca9", "text": "The ever increasing number of cyber attacks requires the cyber security and forensic specialists to detect, analyze and defend against the cyber threats in almost real-time. 
In practice, timely dealing with such a large number of attacks is not possible without deeply perusing the attack features and taking corresponding intelligent defensive actions—this in essence defines cyber threat intelligence notion. However, such an intelligence would not be possible without the aid of artificial intelligence, machine learning and advanced data mining techniques to collect, analyse, and interpret cyber attack evidences. In this introductory chapter we first discuss the notion of cyber threat intelligence and its main challenges and opportunities, and then briefly introduce the chapters of the book which either address the identified challenges or present opportunistic solutions to provide threat intelligence.", "title": "" }, { "docid": "1db42d9d65737129fa08a6ad4d52d27e", "text": "This study introduces a unique prototype system for structural health monitoring (SHM), SmartSync, which uses the building’s existing Internet backbone as a system of virtual instrumentation cables to permit modular and largely plug-and-play deployments. Within this framework, data streams from distributed heterogeneous sensors are pushed through network interfaces in real time and seamlessly synchronized and aggregated by a centralized server, which performs basic data acquisition, event triggering, and database management while also providing an interface for data visualization and analysis that can be securely accessed. The system enables a scalable approach to monitoring tall and complex structures that can readily interface a variety of sensors and data formats (analog and digital) and can even accommodate variable sampling rates. This study overviews the SmartSync system, its installation/operation in theworld’s tallest building, Burj Khalifa, and proof-of-concept in triggering under dual excitations (wind and earthquake).DOI: 10.1061/(ASCE)ST.1943-541X.0000560. © 2013 American Society of Civil Engineers. CE Database subject headings: High-rise buildings; Structural health monitoring; Wind loads; Earthquakes. Author keywords: Tall buildings; Structural health monitoring; System identification.", "title": "" }, { "docid": "4ee6894fade929db82af9cb62fecc0f9", "text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.", "title": "" }, { "docid": "e660a3407d3ae46995054764549adc35", "text": "The factors predicting stress, anxiety and depression in the parents of children with autism remain poorly understood. 
In this study, a cohort of 250 mothers and 229 fathers of one or more children with autism completed a questionnaire assessing reported parental mental health problems, locus of control, social support, perceived parent-child attachment, as well as autism symptom severity and perceived externalizing behaviours in the child with autism. Variables assessing parental cognitions and socioeconomic support were found to be more significant predictors of parental mental health problems than child-centric variables. A path model, describing the relationship between the dependent and independent variables, was found to be a good fit with the observed data for both mothers and fathers.", "title": "" }, { "docid": "db7bc8bbfd7dd778b2900973f2cfc18d", "text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.", "title": "" }, { "docid": "6e8d30f3eaaf6c88dddb203c7b703a92", "text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.", "title": "" }, { "docid": "79b74db73c30fa38239f3d6b84ee5443", "text": "Optimizing an interactive system against a predefined online metric is particularly challenging, especially when the metric is computed from user feedback such as clicks and payments. The key challenge is the counterfactual nature: in the case of Web search, any change to a component of the search engine may result in a different search result page for the same query, but we normally cannot infer reliably from search log how users would react to the new result page. Consequently, it appears impossible to accurately estimate online metrics that depend on user feedback, unless the new engine is actually run to serve live users and compared with a baseline in a controlled experiment. This approach, while valid and successful, is unfortunately expensive and time-consuming. In this paper, we propose to address this problem using causal inference techniques, under the contextual-bandit framework. 
This approach effectively allows one to run potentially many online experiments offline from search log, making it possible to estimate and optimize online metrics quickly and inexpensively. Focusing on an important component in a commercial search engine, we show how these ideas can be instantiated and applied, and obtain very promising results that suggest the wide applicability of these techniques.", "title": "" }, { "docid": "ead461ea8f716f6fab42c08bb7b54728", "text": "Despite the increasing importance of data quality and the rich theoretical and practical contributions in all aspects of data cleaning, there is no single end-to-end off-the-shelf solution to (semi-)automate the detection and the repairing of violations w.r.t. a set of heterogeneous and ad-hoc quality constraints. In short, there is no commodity platform similar to general purpose DBMSs that can be easily customized and deployed to solve application-specific data quality problems. In this paper, we present NADEEF, an extensible, generalized and easy-to-deploy data cleaning platform. NADEEF distinguishes between a programming interface and a core to achieve generality and extensibility. The programming interface allows the users to specify multiple types of data quality rules, which uniformly define what is wrong with the data and (possibly) how to repair it through writing code that implements predefined classes. We show that the programming interface can be used to express many types of data quality rules beyond the well known CFDs (FDs), MDs and ETL rules. Treating user implemented interfaces as black-boxes, the core provides algorithms to detect errors and to clean data. The core is designed in a way to allow cleaning algorithms to cope with multiple rules holistically, i.e. detecting and repairing data errors without differentiating between various types of rules. We showcase two implementations for core repairing algorithms. These two implementations demonstrate the extensibility of our core, which can also be replaced by other user-provided algorithms. Using real-life data, we experimentally verify the generality, extensibility, and effectiveness of our system.", "title": "" }, { "docid": "422b5a17be6923df4b90eaadf3ed0748", "text": "Hate speech is currently of broad and current interest in the domain of social media. The anonymity and flexibility afforded by the Internet has made it easy for users to communicate in an aggressive manner. And as the amount of online hate speech is increasing, methods that automatically detect hate speech is very much required. Moreover, these problems have also been attracting the Natural Language Processing and Machine Learning communities a lot. Therefore, the goal of this paper is to look at how Natural Language Processing applies in detecting hate speech. Furthermore, this paper also applies a current technique in this field on a dataset. As neural network approaches outperforms existing methods for text classification problems, a deep learning model has been introduced, namely the Convolutional Neural Network. This classifier assigns each tweet to one of the categories of a Twitter dataset: hate, offensive language, and neither. The performance of this model has been tested using the accuracy, as well as looking at the precision, recall and F-score. The final model resulted in an accuracy of 91%, precision of 91%, recall of 90% and a F-measure of 90%. However, when looking at each class separately, it should be noted that a lot of hate tweets have been misclassified. 
Therefore, it is recommended to further analyze the predictions and errors, such that more insight is gained on the misclassification.", "title": "" }, { "docid": "97382e18c9ca7c42d8b6c908cde761f2", "text": "In recent years, heatmap regression based models have shown their effectiveness in face alignment and pose estimation. However, Conventional Heatmap Regression (CHR) is not accurate nor stable when dealing with high-resolution facial videos, since it finds the maximum activated location in heatmaps which are generated from rounding coordinates, and thus leads to quantization errors when scaling back to the original high-resolution space. In this paper, we propose a Fractional Heatmap Regression (FHR) for high-resolution video-based face alignment. The proposed FHR can accurately estimate the fractional part according to the 2D Gaussian function by sampling three points in heatmaps. To further stabilize the landmarks among continuous video frames while maintaining the precise at the same time, we propose a novel stabilization loss that contains two terms to address time delay and non-smooth issues, respectively. Experiments on 300W, 300VW and Talking Face datasets clearly demonstrate that the proposed method is more accurate and stable than the state-ofthe-art models. Introduction Face alignment aims to estimate a set of facial landmarks given a face image or video sequence. It is a classic computer vision problem that has attributed to many advanced machine learning algorithms Fan et al. (2018); Bulat and Tzimiropoulos (2017); Trigeorgis et al. (2016); Peng et al. (2015, 2016); Kowalski, Naruniec, and Trzcinski (2017); Chen et al. (2017); Liu et al. (2017); Hu et al. (2018). Nowadays, with the rapid development of consumer hardwares (e.g., mobile phones, digital cameras), High-Resolution (HR) video sequences can be easily collected. Estimating facial landmarks on such highresolution facial data has tremendous applications, e.g., face makeup Chen, Shen, and Jia (2017), editing with special effects Korshunova et al. (2017) in live broadcast videos. However, most existing face alinement methods work on faces with medium image resolutions Chen et al. (2017); Bulat and Tzimiropoulos (2017); Peng et al. (2016); Liu et al. (2017). Therefore, developing face alignment algorithms for high-resolution videos is at the core of this paper. To this end, we propose an accurate and stable algorithm for high-resolution video-based face alignment, named Fractional Heatmap Regression (FHR). It is well known that ∗ indicates equal contributions. Conventional Heatmap Regression (CHR) Loss Fractional Heatmap Regression (FHR) Loss 930 744 411", "title": "" }, { "docid": "22eb9b1de056d03d15c0a3774a898cfd", "text": "Massive volumes of big RDF data are growing beyond the performance capacity of conventional RDF data management systems operating on a single node. Applications using large RDF data demand efficient data partitioning solutions for supporting RDF data access on a cluster of compute nodes. In this paper we present a novel semantic hash partitioning approach and implement a Semantic HAsh Partitioning-Enabled distributed RDF data management system, called Shape. This paper makes three original contributions. First, the semantic hash partitioning approach we propose extends the simple hash partitioning method through direction-based triple groups and direction-based triple replications. 
The latter enhances the former by controlled data replication through intelligent utilization of data access locality, such that queries over big RDF graphs can be processed with zero or very small amount of inter-machine communication cost. Second, we generate locality-optimized query execution plans that are more efficient than popular multi-node RDF data management systems by effectively minimizing the inter-machine communication cost for query processing. Third but not the least, we provide a suite of locality-aware optimization techniques to further reduce the partition size and cut down on the inter-machine communication cost during distributed query processing. Experimental results show that our system scales well and can process big RDF datasets more efficiently than existing approaches.", "title": "" }, { "docid": "462256d2d428f8c77269e4593518d675", "text": "This paper is devoted to the modeling of real textured images by functional minimization and partial differential equations. Following the ideas of Yves Meyer in a total variation minimization framework of L. Rudin, S. Osher, and E. Fatemi, we decompose a given (possible textured) image f into a sum of two functions u+v, where u ¥ BV is a function of bounded variation (a cartoon or sketchy approximation of f), while v is a function representing the texture or noise. To model v we use the space of oscillating functions introduced by Yves Meyer, which is in some sense the dual of the BV space. The new algorithm is very simple, making use of differential equations and is easily solved in practice. Finally, we implement the method by finite differences, and we present various numerical results on real textured images, showing the obtained decomposition u+v, but we also show how the method can be used for texture discrimination and texture segmentation.", "title": "" }, { "docid": "459a3bc8f54b8f7ece09d5800af7c37b", "text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring the security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them if their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. This work involves the conception, design, and implementation of the methodology into a software solution. The results from two qualitative case studies show the advantages of this methodology in comparison to established methodologies.", "title": "" }, { "docid": "9a10716e1d7e24b790fb5dd48ad254ab", "text": "Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. 
This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks. We then describe three types of information processing operations-inference, parameter learning, and structure learning-in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website.", "title": "" }, { "docid": "b53e5d6054b684990e9c5c1e5d2b6b7d", "text": "Automatic Dependent Surveillance-Broadcast (ADS-B) is one of the key technologies for future “e-Enabled” aircrafts. ADS-B uses avionics in the e-Enabled aircrafts to broadcast essential flight data such as call sign, altitude, heading, and other extra positioning information. On the one hand, ADS-B brings significant benefits to the aviation industry, but, on the other hand, it could pose security concerns as channels between ground controllers and aircrafts for the ADS-B communication are not secured, and ADS-B messages could be captured by random individuals who own ADS-B receivers. In certain situations, ADS-B messages contain sensitive information, particularly when communications occur among mission-critical civil airplanes. These messages need to be protected from any interruption and eavesdropping. The challenge here is to construct an encryption scheme that is fast enough for very frequent encryption and that is flexible enough for effective key management. In this paper, we propose a Staged Identity-Based Encryption (SIBE) scheme, which modifies Boneh and Franklin's original IBE scheme to address those challenges, that is, to construct an efficient and functional encryption scheme for ADS-B system. Based on the proposed SIBE scheme, we provide a confidentiality framework for future e-Enabled aircraft with ADS-B capability.", "title": "" } ]
scidocsrr
91755439aad358564ff668278390cb45
Radio Frequency Energy Harvesting and Management for Wireless Sensor Networks
[ { "docid": "3c4e1c7fd5dbdf5ea50eeed1afe23ff9", "text": "Power management is an important concern in sensor networks, because a tethered energy infrastructure is usually not available and an obvious concern is to use the available battery energy efficiently. However, in some of the sensor networking applications, an additional facility is available to ameliorate the energy problem: harvesting energy from the environment. Certain considerations in using an energy harvesting source are fundamentally different from that in using a battery, because, rather than a limit on the maximum energy, it has a limit on the maximum rate at which the energy can be used. Further, the harvested energy availability typically varies with time in a nondeterministic manner. While a deterministic metric, such as residual battery, suffices to characterize the energy availability in the case of batteries, a more sophisticated characterization may be required for a harvesting source. Another issue that becomes important in networked systems with multiple harvesting nodes is that different nodes may have different harvesting opportunity. In a distributed application, the same end-user performance may be achieved using different workload allocations, and resultant energy consumptions at multiple nodes. In this case, it is important to align the workload allocation with the energy availability at the harvesting nodes. We consider the above issues in power management for energy-harvesting sensor networks. We develop abstractions to characterize the complex time varying nature of such sources with analytically tractable models and use them to address key design issues. We also develop distributed methods to efficiently use harvested energy and test these both in simulation and experimentally on an energy-harvesting sensor network, prototyped for this work.", "title": "" } ]
[ { "docid": "600673953f89f29f2f9c3fe73cac1d13", "text": "The multivariate regression model is considered with p regressors. A latent vector with p binary entries serves to identify one of two types of regression coef®cients: those close to 0 and those not. Specializing our general distributional setting to the linear model with Gaussian errors and using natural conjugate prior distributions, we derive the marginal posterior distribution of the binary latent vector. Fast algorithms aid its direct computation, and in high dimensions these are supplemented by a Markov chain Monte Carlo approach to sampling from the known posterior distribution. Problems with hundreds of regressor variables become quite feasible. We give a simple method of assigning the hyperparameters of the prior distribution. The posterior predictive distribution is derived and the approach illustrated on compositional analysis of data involving three sugars with 160 near infra-red absorbances as regressors.", "title": "" }, { "docid": "4c0557527bb445c7d641028e2d88005f", "text": "Small printed antennas will replace the commonly used normal-mode helical antennas of mobile handsets and systems in the future. This paper presents a novel small planar inverted-F antenna (PIFA) which is a common PIFA in which a U-shaped slot is etched to form a dual band operation for wearable and ubiquitous computing equipment. Health issues are considered in selecting suitable antenna topology and the placement of the antenna. Various applications are presented while the paper mainly discusses about the GSM applications.", "title": "" }, { "docid": "2d4fd6da60cad3b6a427bd406f16d6fa", "text": "BACKGROUND\nVarious cutaneous side-effects, including, exanthema, pruritus, urticaria and Lyell or Stevens-Johnson syndrome, have been reported with meropenem (carbapenem), a rarely-prescribed antibiotic. Levofloxacin (fluoroquinolone), a more frequently prescribed antibiotic, has similar cutaneous side-effects, as well as photosensitivity. We report a case of cutaneous hyperpigmentation induced by meropenem and levofloxacin.\n\n\nPATIENTS AND METHODS\nA 67-year-old male was treated with meropenem (1g×4 daily), levofloxacin (500mg twice daily) and amikacin (500mg daily) for 2 weeks, followed by meropenem, levofloxacin and rifampicin (600mg twice daily) for 4 weeks for osteitis of the fifth metatarsal. Three weeks after initiation of antibiotic therapy, dark hyperpigmentation appeared on the lower limbs, predominantly on the anterior aspects of the legs. Histology revealed dark, perivascular and interstitial deposits throughout the dermis, which stained with both Fontana-Masson and Perls stains. Infrared microspectroscopy revealed meropenem in the dermis of involved skin. After withdrawal of the antibiotics, the pigmentation subsided slowly.\n\n\nDISCUSSION\nSimilar cases of cutaneous hyperpigmentation have been reported after use of minocycline. In these cases, histological examination also showed iron and/or melanin deposits within the dermis, but the nature of the causative pigment remains unclear. In our case, infrared spectroscopy enabled us to identify meropenem in the dermis. Two cases of cutaneous hyperpigmentation have been reported following use of levofloxacin, and the results of histological examination were similar. 
This is the first case of cutaneous hyperpigmentation induced by meropenem.", "title": "" }, { "docid": "6f0a2dee696eab0fb42113af2c8a2ad7", "text": "OBJECTIVES\nTo evaluate whether the overgrowth of costal cartilage may cause pectus carinatum using three-dimensional (3D) computed tomography (CT).\n\n\nMETHODS\nTwenty-two patients with asymmetric pectus carinatum were included. The fourth, fifth and sixth ribs and costal cartilages were semi-automatically traced, and their full lengths were measured on three-dimensional CT images using curved multi-planar reformatted (MPR) techniques. The rib length and costal cartilage length, the total combined length of the rib and costal cartilage and the ratio of the cartilage and rib lengths (C/R ratio) in each patient were compared between the protruding side and the opposite side at the levels of the fourth, fifth and sixth ribs.\n\n\nRESULTS\nThe length of the costal cartilage was not different between the more protruded side and the contralateral side (55.8 ± 9.8 mm vs 55.9 ± 9.3 mm at the fourth, 70 ± 10.8 mm vs 71.6 ± 10.8 mm at the fifth and 97.8 ± 13.2 mm vs 99.8 ± 15.5 mm at the sixth; P > 0.05). There were also no significant differences between the lengths of ribs. (265.8 ± 34.9 mm vs 266.3 ± 32.9 mm at the fourth, 279.7 ± 32.7 mm vs 280.6 ± 32.4 mm at the fifth and 283.8 ± 33.9 mm vs 283.9 ± 32.3 mm at the sixth; P > 0.05). There was no statistically significant difference in either the total length of rib and costal cartilage or the C/R ratio according to side of the chest (P > 0.05).\n\n\nCONCLUSIONS\nIn patients with asymmetric pectus carinatum, the lengths of the fourth, fifth and sixth costal cartilage on the more protruded side were not different from those on the contralateral side. These findings suggest that overgrowth of costal cartilage cannot explain the asymmetric protrusion of anterior chest wall and may not be the main cause of pectus carinatum.", "title": "" }, { "docid": "ba3636b17e9a5d1cb3d8755afb1b3500", "text": "Anabolic-androgenic steroids (AAS) are used as ergogenic aids by athletes and non-athletes to enhance performance by augmenting muscular development and strength. AAS administration is often associated with various adverse effects that are generally dose related. High and multi-doses of AAS used for athletic enhancement can lead to serious and irreversible organ damage. Among the most common adverse effects of AAS are some degree of reduced fertility and gynecomastia in males and masculinization in women and children. Other adverse effects include hypertension and atherosclerosis, blood clotting, jaundice, hepatic neoplasms and carcinoma, tendon damage, psychiatric and behavioral disorders. More specifically, this article reviews the reproductive, hepatic, cardiovascular, hematological, cerebrovascular, musculoskeletal, endocrine, renal, immunologic and psychologic effects. Drug-prevention counseling to athletes is highlighted and the use of anabolic steroids is must be avoided, emphasizing that sports goals may be met within the framework of honest competition, free of doping substances.", "title": "" }, { "docid": "a36fae7ccd3105b58a4977b5a2366ee8", "text": "As the number of big data management systems continues to grow, users increasingly seek to leverage multiple systems in the context of a single data analysis task. To efficiently support such hybrid analytics, we develop a tool called PipeGen for efficient data transfer between database management systems (DBMSs). 
PipeGen automatically generates data pipes between DBMSs by leveraging their functionality to transfer data via disk files using common data formats such as CSV. PipeGen creates data pipes by extending such functionality with efficient binary data transfer capabilities that avoid file system materialization, include multiple important format optimizations, and transfer data in parallel when possible. We evaluate our PipeGen prototype by generating 20 data pipes automatically between five different DBMSs. The results show that PipeGen speeds up data transfer by up to 3.8× as compared to transferring using disk files.", "title": "" }, { "docid": "b8fa50df3c76c2192c67cda7ae4d05f5", "text": "Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP.\n Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied.", "title": "" }, { "docid": "971398019db2fb255769727964f1e38a", "text": "Scaling down to deep submicrometer (DSM) technology has made noise a metric of equal importance as compared to power, speed, and area. Smaller feature size, lower supply voltage, and higher frequency are some of the characteristics for DSM circuits that make them more vulnerable to noise. New designs and circuit techniques are required in order to achieve robustness in presence of noise. 
Novel methodologies for designing energy-efficient noise-tolerant exclusive-OR-exclusive- NOR circuits that can operate at low-supply voltages with good signal integrity and driving capability are proposed. The circuits designed, after applying the proposed methodologies, are characterized and compared with previously published circuits for reliability, speed and energy efficiency. To test the driving capability of the proposed circuits, they are embedded in an existing 5-2 compressor design. The average noise threshold energy (ANTE) is used for quantifying the noise immunity of the proposed circuits. Simulation results show that, compared with the best available circuit in literature, the proposed circuits exhibit better noise-immunity, lower power-delay product (PDP) and good driving capability. All of the proposed circuits prove to be faster and successfully work at all ranges of supply voltage starting from 3.3 V down to 0.6 V. The savings in the PDP range from 94% to 21% for the given supply voltage range respectively and the average improvement in the ANTE is 2.67X.", "title": "" }, { "docid": "87fe73a5bc0b80fd0af1d0e65d1039c1", "text": "Reactive programming improves the design of reactive applications by relocating the logic for managing dependencies between dependent values away from the application logic to the language implementation. Many distributed applications are reactive. Yet, existing change propagation algorithms are not suitable in a distributed setting.\n We propose Distributed REScala, a reactive language with a change propagation algorithm that works without centralized knowledge about the topology of the dependency structure among reactive values and avoids unnecessary propagation of changes, while retaining safety guarantees (glitch freedom). Distributed REScala enables distributed reactive programming, bringing the benefits of reactive programming to distributed applications. We demonstrate the enabled design improvements by a case study. We also empirically evaluate the performance of our algorithm in comparison to other algorithms in a simulated distributed setting.", "title": "" }, { "docid": "f12749ba8911e8577fbde2327c9dc150", "text": "Regardless of successful applications of the convolutional neural networks (CNNs) in different fields, its application to seismic waveform classification and first-break (FB) picking has not been explored yet. This letter investigates the application of CNNs for classifying time-space waveforms from seismic shot gathers and picking FBs of both direct wave and refracted wave. We use representative subimage samples with two types of labeled waveform classification to supervise CNNs training. The goal is to obtain the optimal weights and biases in CNNs, which are solved by minimizing the error between predicted and target label classification. The trained CNNs can be utilized to automatically extract a set of time-space attributes or features from any subimage in shot gathers. These attributes are subsequently inputted to the trained fully connected layer of CNNs to output two values between 0 and 1. Based on the two-element outputs, a discriminant score function is defined to provide a single indication for classifying input waveforms. The FB is then located from the calculated score maps by sequentially using a threshold, the first local minimum rule of every trace and a median filter. Finally, we adopt synthetic and real shot data examples to demonstrate the effectiveness of CNNs-based waveform classification and FB picking. 
The results illustrate that CNN is an efficient automatic data-driven classifier and picker.", "title": "" }, { "docid": "11c117d839be466c369274f021caba13", "text": "Android smartphones are becoming increasingly popular. The open nature of Android allows users to install miscellaneous applications, including the malicious ones, from third-party marketplaces without rigorous sanity checks. A large portion of existing malwares perform stealthy operations such as sending short messages, making phone calls and HTTP connections, and installing additional malicious components. In this paper, we propose a novel technique to detect such stealthy behavior. We model stealthy behavior as the program behavior that mismatches with user interface, which denotes the user's expectation of program behavior. We use static program analysis to attribute a top level function that is usually a user interaction function with the behavior it performs. Then we analyze the text extracted from the user interface component associated with the top level function. Semantic mismatch of the two indicates stealthy behavior. To evaluate AsDroid, we download a pool of 182 apps that are potentially problematic by looking at their permissions. Among the 182 apps, AsDroid reports stealthy behaviors in 113 apps, with 28 false positives and 11 false negatives.", "title": "" }, { "docid": "f672df401b24571f81648066b3181890", "text": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models’ operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.", "title": "" }, { "docid": "0403bb8e2b96e3ad1ebfbbc0fa9434a7", "text": "Sarcasm detection from text has gained increasing attention. While one thread of research has emphasized the importance of affective content in sarcasm detection, another avenue of research has explored the effectiveness of word representations. In this paper, we introduce a novel model for automated sarcasm detection in text, called Affective Word Embeddings for Sarcasm (AWES), which incorporates affective information into word representations. Extensive evaluation on sarcasm detection on six datasets across three domains of text (tweets, reviews and forum posts) demonstrates the effectiveness of the proposed model. 
The experimental results indicate that while sentiment affective representations yield best results on datasets comprising of short length text such as tweets, richer representations derived from fine-grained emotions are more suitable for detecting sarcasm from longer length documents such as product reviews and discussion forum posts.", "title": "" }, { "docid": "107b95c3bb00c918c73d82dd678e46c0", "text": "Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).", "title": "" }, { "docid": "473f80115b7fa9979d6d6ffa2995c721", "text": "Context Olive oil, the main fat in the Mediterranean diet, contains polyphenols, which have antioxidant properties and may affect serum lipid levels. Contribution The authors studied virgin olive oil (high in polyphenols), refined olive oil (low in polyphenols), and a mixture of the 2 oils in equal parts. Two hundred healthy young men consumed 25 mL of an olive oil daily for 3 weeks followed by the other olive oils in a randomly assigned sequence. Olive oils with greater polyphenol content increased high-density lipoprotein (HDL) cholesterol levels and decreased serum markers of oxidation. Cautions The increase in HDL cholesterol level was small. Implications Virgin olive oil might have greater health benefits than refined olive oil. The Editors Polyphenol intake has been associated with low cancer and coronary heart disease (CHD) mortality rates (1). Antioxidant and anti-inflammatory properties and improvements in endothelial dysfunction and the lipid profile have been reported for dietary polyphenols (2). Studies have recently suggested that Mediterranean health benefits may be due to a synergistic combination of phytochemicals and fatty acids (3). Olive oil, rich in oleic acid (a monounsaturated fatty acid), is the main fat of the Mediterranean diet (4). To date, most of the protective effect of olive oil within the Mediterranean diet has been attributed to its high monounsaturated fatty acid content (5). However, if the effect of olive oil can be attributed solely to its monounsaturated fatty acid content, any type of olive oil, rapeseed or canola oil, or monounsaturated fatty acidenriched fat would provide similar health benefits. Whether the beneficial effects of olive oil on the cardiovascular system are exclusively due to oleic acid remains to be elucidated. The minor components, particularly the phenolic compounds, in olive oil may contribute to the health benefits derived from the Mediterranean diet. 
Among olive oils usually present on the market, virgin olive oils produced by direct-press or centrifugation methods have higher phenolic content (150 to 350 mg/kg of olive oil) (6). In experimental studies, phenolic compounds in olive oil showed strong antioxidant properties (7, 8). Oxidized low-density lipoprotein (LDL) is currently thought to be more damaging to the arterial wall than native LDL cholesterol (9). Results of randomized, crossover, controlled clinical trials on the antioxidant effect of polyphenols from real-life daily doses of olive oil in humans are, however, conflicting (10). Growing evidence suggests that dietary phenols (1115) and plant-based diets (16) can modulate lipid and lipoprotein metabolism. The Effect of Olive Oil on Oxidative Damage in European Populations (EUROLIVE) Study is a multicenter, randomized, crossover, clinical intervention trial that aims to assess the effect of sustained daily doses of olive oil, as a function of its phenolic content, on the oxidative damage to lipid and LDL cholesterol levels and the lipid profile as cardiovascular risk factors. Methods Participants We recruited healthy men, 20 to 60 years of age, from 6 European cities through newspaper and university advertisements. Of the 344 persons who agreed to be screened, 200 persons were eligible (32 men from Barcelona, Spain; 33 men from Copenhagen, Denmark; 30 men from Kuopio, Finland; 31 men from Bologna, Italy; 40 men from Postdam, Germany; and 34 men from Berlin, Germany) and were enrolled from September 2002 through June 2003 (Figure 1). Participants were eligible for study inclusion if they provided written informed consent, were willing to adhere to the protocol, and were in good health. We preselected volunteers when clinical record, physical examination, and blood pressure were strictly normal and the candidate was a nonsmoker. Next, we performed a complete blood count, biochemical laboratory analyses, and urinary dipstick tests to measure levels of serum glucose, total cholesterol, creatinine, alanine aminotransferase, and triglycerides. We included candidates with values within the reference range. Exclusion criteria were smoking; use of antioxidant supplements, aspirin, or drugs with established antioxidant properties; hyperlipidemia; obesity; diabetes; hypertension; intestinal disease; or any other disease or condition that would impair adherence. We excluded women to avoid the possible interference of estrogens, which are considered to be potential antioxidants (17). All participants provided written informed consent, and the local institutional ethics committees approved the protocol. Figure 1. Study flow diagram. Sequence of olive oil administration: 1) high-, medium-, and low-polyphenol olive oil; 2) medium-, low-, and high-polyphenol olive oil; and 3) low-, high-, and medium-polyphenol olive oil. Design and Study Procedure The trial was a randomized, crossover, controlled study. We randomly assigned participants consecutively to 1 of 3 sequences of olive oil administration. Participants received a daily dose of 25 mL (22 g) of 3 olive oils with high (366 mg/kg), medium (164 mg/kg), and low (2.7 mg/kg) polyphenol content (Figure 1) in replacement of other raw fats. Sequences were high-, medium-, and low-polyphenol olive oil (sequence 1); medium-, low-, and high-polyphenol olive oil (sequence 2); and low-, high-, and medium-polyphenol olive oil (sequence 3). 
In the coordinating center, we prepared random allocation to each sequence, taken from a Latin square, for each center by blocks of 42 participants (14 persons in each sequence), using specific software that was developed at the Municipal Institute for Medical Research, Barcelona, Spain (Aleator, Municipal Institute for Medical Research). The random allocation was faxed to the participating centers upon request for each individual included in the study. Treatment containers were assigned a code number that was concealed from participants and investigators, and the coordinating center disclosed the code number only after completion of statistical analyses. Olive oils were specially prepared for the trial. We selected a virgin olive oil with high natural phenolic content (366 mg/kg) and measured its fatty acid and vitamin E composition. We tested refined olive oil harvested from the same cultivar and soil to find an olive oil with similar quantities of fatty acid and a similar micronutrient profile. Vitamin E was adjusted to values similar to those of the selected virgin olive oil. Because phenolic compounds are lost in the refinement process, the refined olive oil had a low phenolic content (2.7 mg/kg). By mixing virgin and refined olive oil, we obtained an olive oil with an intermediate phenolic content (164 mg/kg). Olive oils did not differ in fat and micronutrient composition (that is, vitamin E, triterpenes, and sitosterols), with the exception of phenolic content. Three-week interventions were preceded by 2-week washout periods, in which we requested that participants avoid olive and olive oil consumption. We chose the 2-week washout period to reach the equilibrium in the plasma lipid profile because longer intervention periods with fat-rich diets did not modify the lipid concentrations (18). Daily doses of 25 mL of olive oil were blindly prepared in containers delivered to the participants at the beginning of each intervention period. We instructed participants to return the 21 containers at the end of each intervention period so that the daily amount of unconsumed olive oil could be registered. Dietary Adherence We measured tyrosol and hydroxytyrosol, the 2 major phenolic compounds in olive oil as simple forms or conjugates (7), by gas chromatography and mass spectrometry in 24-hour urine before and after each intervention period as biomarkers of adherence to the type of olive oil ingested. We asked participants to keep a 3-day dietary record at baseline and after each intervention period. We requested that participants in all centers avoid a high intake of foods that contain antioxidants (that is, vegetables, legumes, fruits, tea, coffee, chocolate, wine, and beer). A nutritionist also personally advised participants to replace all types of habitually consumed raw fats with the olive oils (for example, spread the assigned olive oil on bread instead of butter, put the assigned olive oil on boiled vegetables instead of margarine, and use the assigned olive oil on salads instead of other vegetable oils or standard salad dressings). Data Collection Main outcome measures were changes in biomarkers of oxidative damage to lipids. Secondary outcomes were changes in lipid levels and in biomarkers of the antioxidant status of the participants. We assessed outcome measures at the beginning of the study (baseline) and before (preintervention) and after (postintervention) each olive oil intervention period. 
We collected blood samples at fasting state together with 24-hour urine and recorded anthropometric variables. We measured blood pressure with a mercury sphygmomanometer after at least a 10-minute rest in the seated position. We recorded physical activity at baseline and at the end of the study and assessed it by using the Minnesota Leisure Time Physical Activity Questionnaire (19). We measured 1) glucose and lipid profile, including serum glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, and triglyceride levels determined by enzymatic methods (2023) and LDL cholesterol levels calculated by the Friedewald formula; 2) oxidative damage to lipids, including plasma-circulating oxidized LDL measured by enzyme immunoassay, plasma total F2-isoprostanes determined by using high-performance liquid chromatography and stable isotope-dilution and mass spectrometry, plasma C18 hydroxy fatty acids measured by gas chromatography and mass spectrometry, and serum LDL cholesterol uninduced conjugated dienes measured by spectrophotometry and adjusted for the cholesterol concentration in LDL cholesterol levels; 3) antioxidant sta", "title": "" }, { "docid": "19ee4367e4047f45b60968e3374cae7a", "text": "BACKGROUND\nFusion zones between superficial fascia and deep fascia have been recognized by surgical anatomists since 1938. Anatomical dissection performed by the author suggested that additional superficial fascia fusion zones exist.\n\n\nOBJECTIVES\nA study was performed to evaluate and define fusion zones between the superficial and the deep fascia.\n\n\nMETHODS\nDissection of fresh and minimally preserved cadavers was performed using the accepted technique for defining anatomic spaces: dye injection combined with cross-sectional anatomical dissection.\n\n\nRESULTS\nThis study identified bilaminar membranes traveling from deep to superficial fascia at consistent locations in all specimens. These membranes exist as fusion zones between superficial and deep fascia, and are referred to as SMAS fusion zones.\n\n\nCONCLUSIONS\nNerves, blood vessels and lymphatics transition between the deep and superficial fascia of the face by traveling along and within these membranes, a construct that provides stability and minimizes shear. Bilaminar subfascial membranes continue into the subcutaneous tissues as unilaminar septa on their way to skin. This three-dimensional lattice of interlocking horizontal, vertical, and oblique membranes defines the anatomic boundaries of the fascial spaces as well as the deep and superficial fat compartments of the face. This information facilitates accurate volume augmentation; helps to avoid facial nerve injury; and provides the conceptual basis for understanding jowls as a manifestation of enlargement of the buccal space that occurs with age.", "title": "" }, { "docid": "28b796954834230a0e8218e24bab0d35", "text": "Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. 
We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).", "title": "" }, { "docid": "77f8f90edd85f1af6de8089808153dd7", "text": "Distributed coding is a new paradigm for video compression, based on Slepian and Wolf's and Wyner and Ziv's information-theoretic results from the 1970s. This paper reviews the recent development of practical distributed video coding schemes. Wyner-Ziv coding, i.e., lossy compression with receiver side information, enables low-complexity video encoding where the bulk of the computation is shifted to the decoder. Since the interframe dependence of the video sequence is exploited only at the decoder, an intraframe encoder can be combined with an interframe decoder. The rate-distortion performance is superior to conventional intraframe coding, but there is still a gap relative to conventional motion-compensated interframe coding. Wyner-Ziv coding is naturally robust against transmission errors and can be used for joint source-channel coding. A Wyner-Ziv MPEG encoder that protects the video waveform rather than the compressed bit stream achieves graceful degradation under deteriorating channel conditions without a layered signal representation.", "title": "" }, { "docid": "aec48ddea7f21cabb9648eec07c31dcd", "text": "High voltage Marx generator implementation using IGBT (Insulated Gate Bipolar Transistor) stacks is proposed in this paper. To protect the Marx generator at the moment of breakdown, AOCP (Active Over-Current Protection) part is included. The Marx generator is composed of 12 stages and each stage is made of IGBT stacks, two diode stacks, and capacitors. IGBT stack is used as a single switch. Diode stacks and inductors are used to charge the high voltage capacitor at each stage without power loss. These are also used to isolate input and high voltage negative output in high voltage generation mode. The proposed Marx generator implementation uses IGBT stack with a simple driver and has modular design. This system structure gives compactness and easiness to implement the total system. Some experimental and simulated results are included to verify the system performances in this paper.", "title": "" }, { "docid": "d0b16a75fb7b81c030ab5ce1b08d8236", "text": "It is unquestionable that successive hardware generations have significantly improved GPU computing workload performance over the last several years. Moore's law and DRAM scaling have respectively increased single-chip peak instruction throughput by 3X and off-chip bandwidth by 2.2X from NVIDIA's GeForce 8800 GTX in November 2006 to its GeForce GTX 580 in November 2010. However, raw capability numbers typically underestimate the improvements in real application performance over the same time period, due to significant architectural feature improvements. 
To demonstrate the effects of architecture features and optimizations over time, we conducted experiments on a set of benchmarks from diverse application domains for multiple GPU architecture generations to understand how much performance has truly been improving for those workloads. First, we demonstrate that certain architectural features make a huge difference in the performance of unoptimized code, such as the inclusion of a general cache which can improve performance by 2-4× in some situations. Second, we describe what optimization patterns have been most essential and widely applicable for improving performance for GPU computing workloads across all architecture generations. Some important optimization patterns included data layout transformation, converting scatter accesses to gather accesses, GPU workload regularization, and granularity coarsening, each of which improved performance on some benchmark by over 20%, sometimes by a factor of more than 5×. While hardware improvements to baseline unoptimized code can reduce the speedup magnitude, these patterns remain important for even the most recent GPUs. Finally, we identify which added architectural features created significant new optimization opportunities, such as increased register file capacity or reduced bandwidth penalties for misaligned accesses, which increase performance by 2× or more in the optimized versions of relevant benchmarks.", "title": "" } ]
scidocsrr
391e6550267acdb9f833f7898eb65d00
Multi-Modal Bayesian Embeddings for Learning Social Knowledge Graphs
[ { "docid": "cc0a875eca7237f786b81889f028f1f2", "text": "Online photo services such as Flickr and Zooomr allow users to share their photos with family, friends, and the online community at large. An important facet of these services is that users manually annotate their photos using so called tags, which describe the contents of the photo or provide additional contextual and semantical information. In this paper we investigate how we can assist users in the tagging phase. The contribution of our research is twofold. We analyse a representative snapshot of Flickr and present the results by means of a tag characterisation focussing on how users tags photos and what information is contained in the tagging. Based on this analysis, we present and evaluate tag recommendation strategies to support the user in the photo annotation task by recommending a set of tags that can be added to the photo. The results of the empirical evaluation show that we can effectively recommend relevant tags for a variety of photos with different levels of exhaustiveness of original tagging.", "title": "" }, { "docid": "edf560968135e9083bdc3d4c1ebc230f", "text": "We present a new keyword extraction algorithm that applies to a single document without using a corpus. Frequent terms are extracted first, then a set of co-occurrences between each term and the frequent terms, i.e., occurrences in the same sentences, is generated. Co-occurrence distribution shows importance of a term in the document as follows. If the probability distribution of co-occurrence between term a and the frequent terms is biased to a particular subset of frequent terms, then term a is likely to be a keyword. The degree of bias of a distribution is measured by the χ2-measure. Our algorithm shows comparable performance to tfidf without using a corpus.", "title": "" } ]
[ { "docid": "cf30e30d7683fd2b0dec2bd6cc354620", "text": "As online courses such as MOOCs become increasingly popular, there has been a dramatic increase for the demand for methods to facilitate this type of organisation. While resources for new courses are often freely available, they are generally not suitably organised into easily manageable units. In this paper, we investigate how state-of-the-art topic segmentation models can be utilised to automatically transform unstructured text into coherent sections, which are suitable for MOOCs content browsing. The suitability of this method with regards to course organisation is confirmed through experiments with a lecture corpus, configured explicitly according to MOOCs settings. Experimental results demonstrate the reliability and scalability of this approach over various academic disciplines. The findings also show that the topic segmentation model which used discourse cues displayed the best results overall.", "title": "" }, { "docid": "5474bdc85c226ab613ce281b239a5e6a", "text": "This paper summarizes theoretical framework and preliminary data for a planned action research project in a secondary education institution with the intention to improve teachers` digital skills and capacity for educational ICT use through establishment of a professional learning community and development of cooperation between the school and a university. This study aims to fill the gap of knowledge about how engaging in professional learning communities (PLC) fosters teachers` skills and confidence with ICT. Based on the theoretical assumptions and review of previous research, initial ideas are drafted for an action research project.", "title": "" }, { "docid": "5cdb99bf928039bd5377b3eca521d534", "text": "Thanks to advances in information and communication technologies, there is a prominent increase in the amount of information produced specifically in the form of text documents. In order to, effectively deal with this “information explosion” problem and utilize the huge amount of text databases, efficient and scalable tools and techniques are indispensable. In this study, text clustering which is one of the most important techniques of text mining that aims at extracting useful information by processing data in textual form is addressed. An improved variant of spherical K-Means (SKM) algorithm named multi-cluster SKM is developed for clustering high dimensional document collections with high performance and efficiency. Experiments were performed on several document data sets and it is shown that the new algorithm provides significant increase in clustering quality without causing considerable difference in CPU time usage when compared to SKM algorithm.", "title": "" }, { "docid": "f4b48bdf794bc0e5672cc9efb2c5b48b", "text": "In this paper, we formulate the deep residual network (ResNet) as a control problem of transport equation. In ResNet, the transport equation is solved along the characteristics. Based on this observation, deep neural network is closely related to the control problem of PDEs on manifold. We propose several models based on transport equation, Hamilton-Jacobi equation and Fokker-Planck equation. The discretization of these PDEs on point cloud is also discussed. keywords: Deep residual network; control problem; manifold learning; point cloud; transport equation; Hamilton-Jacobi equation 1 Deep Residual Network Deep convolution neural networks have achieved great successes in image classification. 
Recently, an approach of deep residual learning is proposed to tackle the degradation in the classical deep neural network [7, 8]. The deep residual network can be realized by adding shortcut connections in the classical CNN. A building block is shown in Fig. 1. Formally, a building block is defined as: y = F (x, {Wi}) + x. Here x and y are the input and output vectors of the layers. The function F (x, {Wi}) represents the residual mapping to be learned. In Fig. 1, F = W2 · σ(W1 · σ(x)) in which σ = ReLU ◦ BN denotes composition of ReLU and Batch-Normalization. ∗Department of Mathematics, Hong Kong University of Science & Technology, Hong Kong. Email: mazli@ust.hk. †Yau Mathematical Sciences Center, Tsinghua University, Beijing, China, 100084. Email: zqshi@tsinghua.edu.cn. 1 ar X iv :1 70 8. 05 11 5v 3 [ cs .I T ] 2 5 Ja n 20 18", "title": "" }, { "docid": "c3f81c5e4b162564b15be399b2d24750", "text": "Although memory performance benefits from the spacing of information at encoding, judgments of learning (JOLs) are often not sensitive to the benefits of spacing. The present research examines how practice, feedback, and instruction influence JOLs for spaced and massed items. In Experiment 1, in which JOLs were made after the presentation of each item and participants were given multiple study-test cycles, JOLs were strongly influenced by the repetition of the items, but there was little difference in JOLs for massed versus spaced items. A similar effect was shown in Experiments 2 and 3, in which participants scored their own recall performance and were given feedback, although participants did learn to assign higher JOLs to spaced items with task experience. In Experiment 4, after participants were given direct instruction about the benefits of spacing, they showed a greater difference for JOLs of spaced vs massed items, but their JOLs still underestimated their recall for spaced items. Although spacing effects are very robust and have important implications for memory and education, people often underestimate the benefits of spaced repetition when learning, possibly due to the reliance on processing fluency during study and attending to repetition, and not taking into account the beneficial aspects of study schedule.", "title": "" }, { "docid": "560c2e21bc72cb75b1a802939cc1fd40", "text": "Social comparison theory maintains that people think about themselves compared with similar others. Those in one culture, then, compare themselves with different others and standards than do those in another culture, thus potentially confounding cross-cultural comparisons. A pilot study and Study 1 demonstrated the problematic nature of this reference-group effect: Whereas cultural experts agreed that East Asians are more collectivistic than North Americans, cross-cultural comparisons of trait and attitude measures failed to reveal such a pattern. Study 2 found that manipulating reference groups enhanced the expected cultural differences, and Study 3 revealed that people from different cultural backgrounds within the same country exhibited larger differences than did people from different countries. Cross-cultural comparisons using subjective Likert scales are compromised because of different reference groups. Possible solutions are discussed.", "title": "" }, { "docid": "32b4b275dc355dff2e3e168fe6355772", "text": "The management of coupon promotions is an important issue for marketing managers since it still is the major promotion medium. However, the distribution of coupons does not go without problems. 
Although manufacturers and retailers are investing heavily in the attempt to convince as many customers as possible, overall coupon redemption rate is low. This study improves the strategy of retailers and manufacturers concerning their target selection since both parties often end up in a battle for customers. Two separate models are built: one model makes predictions concerning redemption behavior of coupons that are distributed by the retailer while another model does the same for coupons handed out by manufacturers. By means of the feature-selection technique ‘Relief-F’ the dimensionality of the models is reduced, since it searches for the variables that are relevant for predicting the outcome. In this way, redundant variables are not used in the model-building process. The model is evaluated on real-life data provided by a retailer in FMCG. The contributions of this study for retailers as well as manufacturers are threefold. First, the possibility to classify customers concerning their coupon usage is shown. In addition, it is demonstrated that retailers and manufacturers can stay clear of each other in their marketing campaigns. Finally, the feature-selection technique ‘Relief-F’ proves to facilitate and optimize the performance of the models.", "title": "" }, { "docid": "628840e66a3ea91e75856b7ae43cb9bb", "text": "Optimal shape design of structural elements based on boundary variations results in final designs that are topologically equivalent to the initial choice of design, and general, stable computational schemes for this approach often require some kind of remeshing of the finite element approximation of the analysis problem. This paper presents a methodology for optimal shape design where both these drawbacks can be avoided. The method is related to modern production techniques and consists of computing the optimal distribution in space of an anisotropic material that is constructed by introducing an infimum of periodically distributed small holes in a given homogeneous, i~otropic material, with the requirement that the resulting structure can carry the given loads as well as satisfy other design requirements. The computation of effective material properties for the anisotropic material is carried out using the method of homogenization. Computational results are presented and compared with results obtained by boundary variations.", "title": "" }, { "docid": "65a990303d1d6efd3aea5307e7db9248", "text": "The presentation of news articles to meet research needs has traditionally been a document-centric process. Yet users often want to monitor developing news stories based on an event, rather than by examining an exhaustive list of retrieved documents. In this work, we illustrate a news retrieval system, eventNews, and an underlying algorithm which is event-centric. Through this system, news articles are clustered around a single news event or an event and its sub-events. The algorithm presented can leverage the creation of new Reuters stories and their compact labels as seed documents for the clustering process. The system is configured to generate top-level clusters for news events based on an editorially supplied topical label, known as a ‘slugline,’ and to generate sub-topic-focused clusters based on the algorithm. The system uses an agglomerative clustering algorithm to gather and structure documents into distinct result sets. 
Decisions on whether to merge related documents or clusters are made according to the similarity of evidence derived from two distinct sources, one, relying on a digital signature based on the unstructured text in the document, the other based on the presence of named entity tags that have been assigned to the document by a named entity tagger, in this case Thomson Reuters’ Calais engine. Copyright c © 2016 for the individual papers by the paper’s authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: M. Martinez, U. Kruschwitz, G. Kazai, D. Corney, F. Hopfgartner, R. Campos and D. Albakour (eds.): Proceedings of the NewsIR’16 Workshop at ECIR, Padua, Italy, 20-March-2016, published at http://ceur-ws.org", "title": "" }, { "docid": "3ea7700a4fff166c1a5bc8c6c5aa3ade", "text": "ion-Based Intrusion Detection The implementation of many misuse detection approaches shares a common problem: Each system is written for a single environment and has proved difficult to use in other environments that may have similar policies and concerns. The primary goal of abstraction-based intrusion detection is to address this problem.", "title": "" }, { "docid": "32a964bd36770b8c50a0e74289f4503b", "text": "Several competing human behavior models have been proposed to model and protect against boundedly rational adversaries in repeated Stackelberg security games (SSGs). However, these existing models fail to address three main issues which are extremely detrimental to defender performance. First, while they attempt to learn adversary behavior models from adversaries’ past actions (“attacks on targets”), they fail to take into account adversaries’ future adaptation based on successes or failures of these past actions. Second, they assume that sufficient data in the initial rounds will lead to a reliable model of the adversary. However, our analysis reveals that the issue is not the amount of data, but that there just is not enough of the attack surface exposed to the adversary to learn a reliable model. Third, current leading approaches have failed to include probability weighting functions, even though it is well known that human beings’ weighting of probability is typically nonlinear. The first contribution of this paper is a new human behavior model, SHARP, which mitigates these three limitations as follows: (i) SHARP reasons based on success or failure of the adversary’s past actions on exposed portions of the attack surface to model adversary adaptiveness; (ii) SHARP reasons about similarity between exposed and unexposed areas of the attack surface, and also incorporates a discounting parameter to mitigate adversary’s lack of exposure to enough of the attack surface; and (iii) SHARP integrates a non-linear probability weighting function to capture the adversary’s true weighting of probability. Our second contribution is a first “longitudinal study” – at least in the context of SSGs – of competing models in settings involving repeated interaction between the attacker and the defender. This study, where each experiment lasted a period of multiple weeks with individual sets of human subjects, illustrates the strengths and weaknesses of different models and shows the advantages of SHARP.", "title": "" }, { "docid": "680e9f3b5aeb02822c8889044517f2ec", "text": "Currently, there are many large, automatically constructed knowledge bases (KBs). 
One interesting task is learning from a knowledge base to generate new knowledge either in the form of inferred facts or rules that define regularities. One challenge for learning is that KBs are necessarily open world: we cannot assume anything about the truth values of tuples not included in the KB. When a KB only contains facts (i.e., true statements), which is typically the case, we lack negative examples, which are often needed by learning algorithms. To address this problem, we propose a novel score function for evaluating the quality of a first-order rule learned from a KB. Our metric attempts to include information about the tuples not in the KB when evaluating the quality of a potential rule. Empirically, we find that our metric results in more precise predictions than previous approaches.", "title": "" }, { "docid": "148f306c8c9a4170afcdc8a0b6ff902c", "text": "Word vectors require significant amounts of memory and storage, posing issues to resource limited devices like mobile phones and GPUs. We show that high quality quantized word vectors using 1-2 bits per parameter can be learned by introducing a quantization function into Word2Vec. We furthermore show that training with the quantization function acts as a regularizer. We train word vectors on English Wikipedia (2017) and evaluate them on standard word similarity and analogy tasks and on question answering (SQuAD). Our quantized word vectors not only take 8-16x less space than full precision (32 bit) word vectors but also outperform them on word similarity tasks and question answering.", "title": "" }, { "docid": "32200036224dab6e3a165376a1c7a254", "text": "Modern graphics accelerators have embedded programmable components in the form of vertex and fragment shading units. Current APIs permit specification of the programs for these components using an assembly-language level interface. Compilers for high-level shading languages are available but these read in an external string specification, which can be inconvenient.It is possible, using standard C++, to define a high-level shading language directly in the API. Such a language can be nearly indistinguishable from a special-purpose shading language, yet permits more direct interaction with the specification of textures and parameters, simplifies implementation, and enables on-the-fly generation, manipulation, and specialization of shader programs. A shading language built into the API also permits the lifting of C++ host language type, modularity, and scoping constructs into the shading language without any additional implementation effort.", "title": "" }, { "docid": "0950052c92b4526c253acc0d4f0f45a0", "text": "Pictogram communication is successful when participants at both end of the communication channel share a common pictogram interpretation. Not all pictograms carry universal interpretation, however; the issue of ambiguous pictogram interpretation must be addressed to assist pictogram communication. To unveil the ambiguity possible in pictogram interpretation, we conduct a human subject experiment to identify culture-specific criteria employed by humans by detecting cultural differences in pictogram interpretations. Based on the findings, we propose a categorical semantic relevance measure which calculates how relevant a pictogram is to a given interpretation in terms of a given pictogram category. The proposed measure is applied to categorized pictogram interpretations to enhance pictogram retrieval performance. 
The WordNet, the ChaSen, and the EDR Electronic Dictionary registered to the Language Grid are utilized to merge synonymous pictogram interpretations and to categorize pictogram interpretations into super-concept categories. We show how the Language Grid can assist the crosscultural research process.", "title": "" }, { "docid": "818862fc058767caa026ca58d0b9b1d2", "text": "This paper proposes a novel outer-rotor flux-switching permanent-magnet (OR-FSPM) machine with specific wedge-shaped magnets for in-wheel light-weight traction applications. First, the geometric topology is introduced. Then, the combination principle of stator slots and rotor poles for OR-FSPM machines is investigated. Furthermore, to demonstrate the relationship between performance specifications (e.g., torque and speed) and key design parameters and dimensions (e.g., rotor outer diameter and stack length) of OR-FSPM machines at preliminary design stage, an analytical torque-sizing equation is proposed and verified by two-dimensional (2-D) finite-element analysis (FEA). Moreover, optimizations of key dimensions are conducted on an initially designed proof-of-principle three-phase 12-stator-slot/22-rotor-pole prototyped machine. Then, based on 2-D-FEA, a comprehensive comparison between a pair of OR-FSPM machines with rectangular- and wedge-shaped magnets and a surface-mounted permanent-magnet (SPM) machine is performed. The results indicate that the proposed OR-FSPM machine with wedge-shaped magnets exhibits better flux-weakening capability, higher efficiency, and wider speed range than the counterparts, especially for torque capability, where the proposed wedge-shaped magnets-based one could produce 40% and 61.5% more torque than the rectangular-shaped magnets-based machine and SPM machine, respectively, with the same rated current density (5 A/mm2). Finally, the predicted performance of the proposed OR-FSPM machine is verified by experiments on a prototyped machine.", "title": "" }, { "docid": "ef264055e4bb6e6205e92ba6ed38d7bd", "text": "3D printing or additive manufacturing is a novel method of manufacturing parts directly from digital model using layer-by-layer material build-up approach. This tool-less manufacturing method can produce fully dense metallic parts in short time, with high precision. Features of additive manufacturing like freedom of part design, part complexity, light weighting, part consolidation, and design for function are garnering particular interests in metal additive manufacturing for aerospace, oil and gas, marine, and automobile applications. Powder bed fusion, in which each powder bed layer is selectively fused using energy source like laser, is the most promising additive manufacturing technology that can be used for manufacturing small, low-volume, complex metallic parts. This review presents overview of 3D Printing technologies, materials, applications, advantages, disadvantages, challenges, economics, and applications of 3D metal printing technology, the DMLS process in detail, and also 3D metal printing perspectives in developing countries.", "title": "" }, { "docid": "14fe7deaece11b3d4cd4701199a18599", "text": "\"Natively unfolded\" proteins occupy a unique niche within the protein kingdom in that they lack ordered structure under conditions of neutral pH in vitro. Analysis of amino acid sequences, based on the normalized net charge and mean hydrophobicity, has been applied to two sets of proteins: small globular folded proteins and \"natively unfolded\" ones. 
The results show that \"natively unfolded\" proteins are specifically localized within a unique region of charge-hydrophobicity phase space and indicate that a combination of low overall hydrophobicity and large net charge represents a unique structural feature of \"natively unfolded\" proteins.", "title": "" }, { "docid": "eee9f9e1e8177b68a278eab025dae84b", "text": "Herzberg et al. (1959) developed “Two Factors theory” to focus on working conditions necessary for employees to be motivated. Since Herzberg examined only white-collar workers in his research, this article reviews later studies on motivation factors of blue-collar workers versus white-collar workers and suggests some hypotheses for further research.", "title": "" } ]
scidocsrr
f41d9f52dfcfb9e9ca09ee42919d838d
Relationship between visual-motor integration and handwriting skills of children in kindergarten: a modified replication study.
[ { "docid": "b8909da12187cc3c1b9bc428371fc795", "text": "Among various perceptual-motor tests, only visuomotor integration was significant in predicting accuracy of handwriting performance for the total sample of 59 children consisting of 19 clumsy children, 22 nonclumsy dysgraphic children, and 18 'normal' children. They were selected from a sample of 360 fourth-graders (10-yr.-olds). For groups of clumsy and 'normal' children, the prediction of handwriting performance is difficult. However, correlations among scores on 6 measures showed that handwriting was significantly related to visuomotor integration, visual form perception, and tracing in the total group and to visuomotor integration and visual form perception in the clumsy group. The weakest correlations occurred between tests measuring simple psychomotor functions and handwriting. Moreover, clumsy children were expected to do poorly on tests measuring aiming, tracing, and visuomotor integration, but not on tests measuring visual form perception and finger tapping. Dysgraphic children were expected to do poorly on visuomotor integration only.", "title": "" } ]
[ { "docid": "3c4219212dfeb01d2092d165be0cfb44", "text": "Classical substrate noise analysis considers the silicon resistivity of an integrated circuit only as doping dependent besides neglecting diffusion currents as well. In power circuits minority carriers are injected into the substrate and propagate by drift–diffusion. In this case the conductivity of the substrate is spatially modulated and this effect is particularly important in high injection regime. In this work a description of the coupling between majority and minority drift–diffusion currents is presented. A distributed model of the substrate is then proposed to take into account the conductivity modulation and its feedback on diffusion processes. The model is expressed in terms of equivalent circuits in order to be fully compatible with circuit simulators. The simulation results are then discussed for diodes and bipolar transistors and compared to the ones obtained from physical device simulations and measurements. 2014 Published by Elsevier Ltd.", "title": "" }, { "docid": "3fd2f7e4d0d0460fda7f7e947e45d9d9", "text": "Because of the complexity of the hospital environment, there exist a lot of medical information systems from different vendors with incompatible structures. In order to establish an enterprise hospital information system, the integration among these heterogeneous systems must be considered. Complete integration should cover three aspects: data integration, function integration and workflow integration. However most of the previous design of architecture did not accomplish such a complete integration. This article offers an architecture design of the enterprise hospital information system based on the concept of digital neural network system in hospital. It covers all three aspects of integration, and eventually achieves the target of one virtual data center with Enterprise Viewer for users of different roles. The initial implementation of the architecture in the 5-year Digital Hospital Project in Huzhou Central hospital of Zhejiang Province is also described", "title": "" }, { "docid": "92cafadc922255249108ce4a0dad9b98", "text": "Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5% over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2× faster on CPU. 
Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices.", "title": "" }, { "docid": "14c4e051a23576b33507c453d7e0fe84", "text": "There is a growing interest in subspace learning techniques for face recognition; however, the excessive dimension of the data space often brings the algorithms into the curse of dimensionality dilemma. In this paper, we present a novel approach to solve the supervised dimensionality reduction problem by encoding an image object as a general tensor of second or even higher order. First, we propose a discriminant tensor criterion, whereby multiple interrelated lower dimensional discriminative subspaces are derived for feature extraction. Then, a novel approach, called k-mode optimization, is presented to iteratively learn these subspaces by unfolding the tensor along different tensor directions. We call this algorithm multilinear discriminant analysis (MDA), which has the following characteristics: 1) multiple interrelated subspaces can collaborate to discriminate different classes, 2) for classification problems involving higher order tensors, the MDA algorithm can avoid the curse of dimensionality dilemma and alleviate the small sample size problem, and 3) the computational cost in the learning stage is reduced to a large extent owing to the reduced data dimensions in k-mode optimization. We provide extensive experiments on ORL, CMU PIE, and FERET databases by encoding face images as second- or third-order tensors to demonstrate that the proposed MDA algorithm based on higher order tensors has the potential to outperform the traditional vector-based subspace learning algorithms, especially in the cases with small sample sizes", "title": "" }, { "docid": "671eb73ad86525cb183e2b8dbfe09947", "text": "We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent’s experience. Because this loss is highly flexible in its ability to take into account the agent’s history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG’s learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.", "title": "" }, { "docid": "0282e5fb426b7d471b7c78cf1d839a1d", "text": "The advancement of Internet-of-Things (IoT) edge devices with various types of sensors enables us to harness diverse information with Mobile Crowd-Sensing applications (MCS). This highly dynamic setting entails the collection of ubiquitous data traces, originating from sensors carried by people, introducing new information security challenges; one of them being the preservation of data trustworthiness. What is needed in these settings is the timely analysis of these large datasets to produce accurate insights on the correctness of user reports. Existing data mining and other artificial intelligence methods are the most popular to gain hidden insights from IoT data, albeit with many challenges. 
In this paper, we first model the cyber trustworthiness of MCS reports in the presence of intelligent and colluding adversaries. We then rigorously assess, using real IoT datasets, the effectiveness and accuracy of well-known data mining algorithms when employed towards IoT security and privacy. By taking into account the spatio-temporal changes of the underlying phenomena, we demonstrate how concept drifts can masquerade the existence of attackers and their impact on the accuracy of both the clustering and classification processes. Our initial set of results clearly show that these unsupervised learning algorithms are prone to adversarial infection, thus, magnifying the need for further research in the field by leveraging a mix of advanced machine learning models and mathematical optimization techniques.", "title": "" }, { "docid": "4ff2e867a47fa27a95e5c190136dd73a", "text": "Lack of trust is one of the most frequently cited reasons for consumers not purchasing from Internet vendors. During the last four years a number of empirical studies have investigated the role of trust in the specific context of e-commerce, focusing on different aspects of this multi-dimensional construct. However, empirical research in this area is beset by conflicting conceptualizations of the trust construct, inadequate understanding of the relationships between trust, its antecedents and consequents, and the frequent use of trust scales that are neither theoretically derived nor rigorously validated. The major objective of this paper is to provide an integrative review of the empirical literature on trust in e-commerce in order to allow cumulative analysis of results. The interpretation and comparison of different empirical studies on on-line trust first requires conceptual clarification. A set of trust constructs is proposed that reflects both institutional phenomena (system trust) and personal and interpersonal forms of trust (dispositional trust, trusting beliefs, trusting intentions and trust-related behaviours), thus facilitating a multi-level and multi-dimensional analysis of research problems related to trust in e-commerce. r 2003 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "e20ee5036b2b30a286b63a2e4452cb71", "text": "This work reported for the first time the anodic electrochemiluminescence (ECL) of CdTe quantum dots (QDs) in aqueous system and its analytical application based on the ECL energy transfer to analytes. The CdTe QDs were modified with mercaptopropionic acid to obtain water-soluble QDs and stable and intensive anodic ECL emission with a peak value at +1.17 V (vs Ag/AgCl) in pH 9.3 PBS at an indium tin oxide (ITO) electrode. The ECL emission was demonstrated to involve the participation of superoxide ion produced at the ITO surface, which could inject an electron into the 1Se quantum-confined orbital of CdTe to form QDs anions. The collision between these anions and the oxidation products of QDs led to the formation of the excited state of QDs and ECL emission. The ECL energy transfer from the excited CdTe QDs to quencher produced a novel methodology for detection of catechol derivatives. Using dopamine and L-adrenalin as model analytes, this ECL method showed wide linear ranges from 50 nM to 5 microM and 80 nM to 30 microM for these species. 
Both ascorbic acid and uric acid, which are common interferences, did not interfere with the detection of catechol derivatives in practical biological samples.", "title": "" }, { "docid": "3a5f539380cfe62e0175c31183bd4733", "text": "A single low cost inertial measurement unit (IMU) is often used in conjunction with GPS to increase the accuracy and improve the availability of the navigation solution for a pedestrian navigation system. This paper develops several fusion algorithms for using multiple IMUs to enhance performance. In particular, this research seeks to understand the benefits and detriments of each fusion method in the context of pedestrian navigation. Three fusion methods are proposed. First, all raw IMU measurements are mapped onto a common frame (i.e., a virtual frame) and processed in a typical combined GPS-IMU Kalman filter. Second, a large stacked filter is constructed of several IMUs. This filter construction allows for relative information between the IMUs to be used as updates. Third, a federated filter is used to process each IMU as a local filter. The output of each local filter is shared with a master filter, which in turn, shares information back with the local filters. The construction of each filter is discussed and improvements are made to the virtual IMU (VIMU) architecture, which is the most commonly used architecture in the literature. Since accuracy and availability are the most important characteristics of a pedestrian navigation system, the analysis of each filter's performance focuses on these two parameters. Data was collected in two environments, one where GPS signals are moderately attenuated and another where signals are severely attenuated. Accuracy is shown as a function of architecture and the number of IMUs used.", "title": "" }, { "docid": "83d06fa3fd9ccd5937fcca3403372b87", "text": "Hadoop RPC is the basic communication mechanism in the Hadoop ecosystem. It is used with other Hadoop components like MapReduce, HDFS, and HBase in real world data-centers, e.g. Facebook and Yahoo!. However, the current Hadoop RPC design is built on Java sockets interface, which limits its potential performance. The High Performance Computing community has exploited high throughput and low latency networks such as InfiniBand for many years. In this paper, we first analyze the performance of current Hadoop RPC design by unearthing buffer management and communication bottlenecks, that are not apparent on the slower speed networks. Then we propose a novel design (RPCoIB) of Hadoop RPC with RDMA over InfiniBand networks. RPCoIB provides a JVM-bypassed buffer management scheme and utilizes message size locality to avoid multiple memory allocations and copies in data serialization and deserialization. Our performance evaluations reveal that the basic ping-pong latencies for varied data sizes are reduced by 42%-49% and 46%-50% compared with 10GigE and IPoIB QDR (32Gbps), respectively, while the RPCoIB design also improves the peak throughput by 82% and 64% compared with 10GigE and IPoIB. As compared to default Hadoop over IPoIB QDR, our RPCoIB design improves the performance of the Sort benchmark on 64 compute nodes by 15%, while it improves the performance of CloudBurst application by 10%. We also present thorough, integrated evaluations of our RPCoIB design with other research directions, which optimize HDFS and HBase using RDMA over InfiniBand. Compared with their best performance, we observe 10% improvement for HDFS-IB, and 24% improvement for HBase-IB. 
To the best of our knowledge, this is the first such design of the Hadoop RPC system over high performance networks such as InfiniBand.", "title": "" }, { "docid": "203f34a946e00211ebc6fce8e2a061ed", "text": "We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm can produce more superior personalized document summaries than all the other methods in that the summaries generated by our algorithm can better satisfy a user's personal preferences.", "title": "" }, { "docid": "3ec63f1c1f74c5d11eaa9d360ceaac55", "text": "High-level shape understanding and technique evaluation on large repositories of 3D shapes often benefit from additional information known about the shapes. One example of such information is the semantic segmentation of a shape into functional or meaningful parts. Generating accurate segmentations with meaningful segment boundaries is, however, a costly process, typically requiring large amounts of user time to achieve high quality results. In this paper we present an active learning framework for large dataset segmentation, which iteratively provides the user with new predictions by training new models based on already segmented shapes. Our proposed pipeline consists of three novel components. First, we a propose a fast and relatively accurate feature-based deep learning model to provide datasetwide segmentation predictions. Second, we propose an information theory measure to estimate the prediction quality and for ordering subsequent fast and meaningful shape selection. Our experiments show that such suggestive ordering helps reduce users time and effort, produce high quality predictions, and construct a model that generalizes well. Finally, we provide effective segmentation refinement features to help the user quickly correct any incorrect predictions. We show that our framework is more accurate and in general more efficient than state-of-the-art, for massive dataset segmentation with while also providing consistent segment boundaries.", "title": "" }, { "docid": "77f094fc11ed42183c37048a8855f60c", "text": "In this paper, we propose an unsupervised feature learning method called deep binary descriptor with multi-quantization (DBD-MQ) for visual matching. Existing learning-based binary descriptors such as compact binary face descriptor (CBFD) and DeepBit utilize the rigid sign function for binarization despite of data distributions, thereby suffering from severe quantization loss. In order to address the limitation, our DBD-MQ considers the binarization as a multi-quantization task. Specifically, we apply a K-AutoEncoders (KAEs) network to jointly learn the parameters and the binarization functions under a deep learning framework, so that discriminative binary descriptors can be obtained with a fine-grained multi-quantization. 
Extensive experimental results on different visual analysis including patch retrieval, image matching and image retrieval show that our DBD-MQ outperforms most existing binary feature descriptors.", "title": "" }, { "docid": "d24980c1a1317c8dd055741da1b8c7a7", "text": "Influence Maximization (IM), which selects a set of <inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math> <alternatives><inline-graphic xlink:href=\"li-ieq1-2807843.gif\"/></alternatives></inline-formula> users (called seed set) from a social network to maximize the expected number of influenced users (called influence spread), is a key algorithmic problem in social influence analysis. Due to its immense application potential and enormous technical challenges, IM has been extensively studied in the past decade. In this paper, we survey and synthesize a wide spectrum of existing studies on IM from an <italic>algorithmic perspective</italic>, with a special focus on the following key aspects: (1) a review of well-accepted diffusion models that capture the information diffusion process and build the foundation of the IM problem, (2) a fine-grained taxonomy to classify existing IM algorithms based on their design objectives, (3) a rigorous theoretical comparison of existing IM algorithms, and (4) a comprehensive study on the applications of IM techniques in combining with novel context features of social networks such as topic, location, and time. Based on this analysis, we then outline the key challenges and research directions to expand the boundary of IM research.", "title": "" }, { "docid": "4681e8f07225e305adfc66cd1b48deb8", "text": "Collaborative work among students, while an important topic of inquiry, needs further treatment as we still lack the knowledge regarding obstacles that students face, the strategies they apply, and the relations among personal and group aspects. This article presents a diary study of 54 master’s students conducting group projects across four semesters. A total of 332 diary entries were analysed using the C5 model of collaboration that incorporates elements of communication, contribution, coordination, cooperation and collaboration. Quantitative and qualitative analyses show how these elements relate to one another for students working on collaborative projects. It was found that face-to-face communication related positively with satisfaction and group dynamics, whereas online chat correlated positively with feedback and closing the gap. Managing scope was perceived to be the most common challenge. The findings suggest the varying affordances and drawbacks of different methods of communication, collaborative work styles and the strategies of group members.", "title": "" }, { "docid": "5e0d5cf53369cc1065bdf0dedb74c557", "text": "The automatic detection of diseases in images acquired through chest X-rays can be useful in clinical diagnosis because of a shortage of experienced doctors. Compared with natural images, those acquired through chest X-rays are obtained by using penetrating imaging technology, such that there are multiple levels of features in an image. It is thus difficult to extract the features of a disease for further diagnosis. In practice, healthy people are in a majority and the morbidities of different disease vary, because of which the obtained labels are imbalanced. The two main challenges of diagnosis though chest X-ray images are to extract discriminative features from X-ray images and handle the problem of imbalanced data distribution. 
In this paper, we propose a deep neural network called DeepCXray that simultaneously solves these two problems. An InceptionV3 model is trained to extract features from raw images, and a new objective function is designed to address the problem of imbalanced data distribution. The proposed objective function is a performance index based on cross entropy loss that automatically weights the ratio of positive to negative samples. In other words, the proposed loss function can automatically reduce the influence of an overwhelming number of negative samples by shrinking each cross entropy terms by a different extent. Extensive experiments highlight the promising performance of DeepCXray on the ChestXray14 dataset of the National Institutes of Health in terms of the area under the receiver operating characteristic curve.", "title": "" }, { "docid": "54bcaafa495d6d778bddbbb5d5cf906e", "text": "Low-shot visual learning—the ability to recognize novel object categories from very few examples—is a hallmark of human visual intelligence. Existing machine learning approaches fail to generalize in the same way. To make progress on this foundational problem, we present a novel protocol to evaluate low-shot learning on complex images where the learner is permitted to first build a feature representation. Then, we propose and evaluate representation regularization techniques that improve the effectiveness of convolutional networks at the task of low-shot learning, leading to a 2x reduction in the amount of training data required at equal accuracy rates on the challenging ImageNet dataset.", "title": "" }, { "docid": "1177ddef815db481082feb75afd79ec5", "text": "This paper explores three main areas, firstly, website accessibility guidelines; secondly, website accessibility tools and finally the implication of human factors in the process of implementing successful e-Government websites. It investigates the issues that make a website accessible and explores the importance placed on web usability and accessibility with respect to e-Government websites. It briefly examines accessibility guidelines, evaluation methods and analysis tools. It then evaluates the web accessibility of e-Government websites of Saudi Arabia and Oman by adapting the ‘W3C Web Content Accessibility Guidelines’. Finally, it presents recommendations for improvement of e-Government website accessibility.", "title": "" }, { "docid": "c0d722d72955dd1ec6df3cc24289979f", "text": "Citing classic psychological research and a smattering of recent studies, Kassin, Dror, and Kukucka (2013) proposed the operation of a forensic confirmation bias, whereby preexisting expectations guide the evaluation of forensic evidence in a self-verifying manner. In a series of studies, we tested the hypothesis that knowing that a defendant had confessed would taint people's evaluations of handwriting evidence relative to those not so informed. In Study 1, participants who read a case summary in which the defendant had previously confessed were more likely to erroneously conclude that handwriting samples from the defendant and perpetrator were authored by the same person, and were more likely to judge the defendant guilty, compared with those in a no-confession control group. Study 2 replicated and extended these findings using a within-subjects design in which participants rated the same samples both before and after reading a case summary. 
These findings underscore recent critiques of the forensic sciences as subject to bias, and suggest the value of insulating forensic examiners from contextual information.", "title": "" }, { "docid": "659a9d6c876ffd7cfce5622736bde7ca", "text": "We study a novel problem of social context summarization for Web documents. Traditional summarization research has focused on extracting informative sentences from standard documents. With the rapid growth of online social networks, abundant user generated content (e.g., comments) associated with the standard documents is available. Which parts in a document are social users really caring about? How can we generate summaries for standard documents by considering both the informativeness of sentences and interests of social users? This paper explores such an approach by modeling Web documents and social contexts into a unified framework. We propose a dual wing factor graph (DWFG) model, which utilizes the mutual reinforcement between Web documents and their associated social contexts to generate summaries. An efficient algorithm is designed to learn the proposed factor graph model.Experimental results on a Twitter data set validate the effectiveness of the proposed model. By leveraging the social context information, our approach obtains significant improvement (averagely +5.0%-17.3%) over several alternative methods (CRF, SVM, LR, PR, and DocLead) on the performance of summarization.", "title": "" } ]
scidocsrr
55bd54d13a2ba4dd6ad7fd7d079f1b86
Logics for resource-bounded agents
[ { "docid": "4285d9b4b9f63f22033ce9a82eec2c76", "text": "To ease large-scale realization of agent applications there is an urgent need for frameworks, methodologies and toolkits that support the effective development of agent systems. Moreover, since one of the main tasks for which agent systems were invented is the integration between heterogeneous software, independently developed agents should be able to interact successfully. In this paper, we present JADE (Java Agent Development Environment), a software framework to build agent systems for the management of networked information resources in compliance with the FIPA specifications for inter-operable intelligent multi-agent systems. The goal of JADE is to simplify development while ensuring standard compliance through a comprehensive set of system services and agents. JADE can then be considered to be an agent middle-ware that implements an efficient agent platform and supports the development of multi-agent systems. It deals with all the aspects that are not peculiar to agent internals and that are independent of the applications, such as message transport, encoding and parsing, or agent life-cycle management. Copyright  2001 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "12866e003093bc7d89d751697f2be93c", "text": "We argue that the right way to understand distributed protocols is by considering how messages change the state of knowledge of a system. We present a hierarchy of knowledge states that a system may be in, and discuss how communication can move the system's state of knowledge of a fact up the hierarchy. Of special interest is the notion of common knowledge. Common knowledge is an essential state of knowledge for reaching agreements and coordinating action. We show that in practical distributed systems, common knowledge is not attainable. We introduce various relaxations of common knowledge that are attainable in many cases of interest. We describe in what sense these notions are appropriate, and discuss their relationship to each other. We conclude with a discussion of the role of knowledge in distributed systems.", "title": "" } ]
[ { "docid": "fdff78b32803eb13904c128d8e011ea8", "text": "The task of identifying when to take a conversational turn is an important function of spoken dialogue systems. The turn-taking system should also ideally be able to handle many types of dialogue, from structured conversation to spontaneous and unstructured discourse. Our goal is to determine how much a generalized model trained on many types of dialogue scenarios would improve on a model trained only for a specific scenario. To achieve this goal we created a large corpus of Wizard-of-Oz conversation data which consisted of several different types of dialogue sessions, and then compared a generalized model with scenario-specific models. For our evaluation we go further than simply reporting conventional metrics, which we show are not informative enough to evaluate turn-taking in a real-time system. Instead, we process results using a performance curve of latency and false cut-in rate, and further improve our model's real-time performance using a finite-state turn-taking machine. Our results show that the generalized model greatly outperformed the individual model for attentive listening scenarios but was worse in job interview scenarios. This implies that a model based on a large corpus is better suited to conversation which is more user-initiated and unstructured. We also propose that our method of evaluation leads to more informative performance metrics in a real-time system.", "title": "" }, { "docid": "f6647e82741dfe023ee5159bd6ac5be9", "text": "3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of a scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on real world and synthetic RGB-D videos demonstrate the superior performance of our method.", "title": "" }, { "docid": "766dd6c18f645d550d98f6e3e86c7b2f", "text": "Licorice root has been used for years to regulate gastrointestinal function in traditional Chinese medicine. This study reveals the gastrointestinal effects of isoliquiritigenin, a flavonoid isolated from the roots of Glycyrrhiza glabra (a kind of Licorice). In vivo, isoliquiritigenin produced a dual dose-related effect on the charcoal meal travel, inhibitory at the low doses, while prokinetic at the high doses. In vitro, isoliquiritigenin showed an atropine-sensitive concentration-dependent spasmogenic effect in isolated rat stomach fundus. However, a spasmolytic effect was observed in isolated rabbit jejunums, guinea pig ileums and atropinized rat stomach fundus, either as noncompetitive inhibition of agonist concentration-response curves, inhibition of high K(+) (80 mM)-induced contractions, or displacement of Ca(2+) concentration-response curves to the right, indicating a calcium antagonist effect. Pretreatment with N(omega)-nitro-L-arginine methyl ester (L-NAME; 30 microM), indomethacin (10 microM), methylene blue (10 microM), tetraethylammonium chloride (0.5 mM), glibenclamide (1 microM), 4-aminopyridine (0.1 mM), or clotrimazole (1 microM) did not inhibit the spasmolytic effect. 
These results indicate that isoliquiritigenin plays a dual role in regulating gastrointestinal motility, both spasmogenic and spasmolytic. The spasmogenic effect may involve the activating of muscarinic receptors, while the spasmolytic effect is predominantly due to blockade of the calcium channels.", "title": "" }, { "docid": "0022121142a2b3a2b627fcb1cfe48ccb", "text": "Graph colouring and its generalizations are useful tools in modelling a wide variety of scheduling and assignment problems. In this paper we review several variants of graph colouring, such as precolouring extension, list colouring, multicolouring, minimum sum colouring, and discuss their applications in scheduling.", "title": "" }, { "docid": "3c118c4f2b418f801faee08050e3a165", "text": "Unsupervised learning from visual data is one of the most difficult challenges in computer vision. It is essential for understanding how visual recognition works. Learning from unsupervised input has an immense practical value, as huge quantities of unlabeled videos can be collected at low cost. Here we address the task of unsupervised learning to detect and segment foreground objects in single images. We achieve our goal by training a student pathway, consisting of a deep neural network that learns to predict, from a single input image, the output of a teacher pathway that performs unsupervised object discovery in video. Our approach is different from the published methods that perform unsupervised discovery in videos or in collections of images at test time. We move the unsupervised discovery phase during the training stage, while at test time we apply the standard feed-forward processing along the student pathway. This has a dual benefit: firstly, it allows, in principle, unlimited generalization possibilities during training, while remaining fast at testing. Secondly, the student not only becomes able to detect in single images significantly better than its unsupervised video discovery teacher, but it also achieves state of the art results on two current benchmarks, YouTube Objects and Object Discovery datasets. At test time, our system is two orders of magnitude faster than other previous methods.", "title": "" }, { "docid": "44cf91a19b11fa62a5859ce236e7dc3f", "text": "We previously reported an ultrasound-guided transversus thoracic abdominis plane (TTP) block, able to target many anterior branches of the intercostal nerve (Th2-6), releasing the pain in the internal mammary area [1–3]. The injection point for this TTP block was located between the transversus thoracic muscle and the internal intercostal muscle, amid the third and fourth left ribs next to the sternum. However, analgesia efficacy in the region of an anterior branch of the sixth intercostal nerve was unstable. We subsequently investigated a more appropriate injection point for an ultrasound-guided TTP block. We selected 10 healthy volunteers for this study. All volunteers received bilateral TTP blocks. Right lateral TTP blocks of all cases involved the injection of 20 mL of 0.375% levobupivacaine into the fascial plane between the transversus thoracic muscle and the internal intercostal muscle at between the third and fourth ribs connecting at the sternum. On the other hand, all left lateral TTP blocks were administered by injection of 20 mL of 0.375% levobupivacaine into the fascial plane between the transversus thoracic muscle and the internal intercostal muscle between the fourth and fifth connecting at the sternum. 
In 20 minutes after the injections, we investigated the spread of local anesthetic on the TTP by an ultrasound machine (Fig. 1) and the analgesic effect by a sense testing. The sense testing is blindly the cold testing. The spread of local anesthetic is detailed in Table 1. As for the analgesic effect of sense testing, both sides gain sensory extinction in the region of multiple anterior branches of inter-", "title": "" }, { "docid": "4645d0d7b1dfae80657f75d3751ef72a", "text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.", "title": "" }, { "docid": "198d352bf0c044ceccddaeb630b3f9c7", "text": "In this letter, we present an original demonstration of an associative learning neural network inspired by the famous Pavlov's dogs experiment. A single nanoparticle organic memory field effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for the learning and implement short-term association using temporal coding and spike-timing-dependent plasticity–based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamic of the NOMFET in pulse regime.", "title": "" }, { "docid": "bade68b8f95fc0ae5a377a52c8b04b5c", "text": "The majority of deterministic mathematical programming problems have a compact formulation in terms of algebraic equations. Therefore they can easily take advantage of the facilities offered by algebraic modeling languages. These tools allow expressing models by using convenient mathematical notation (algebraic equations) and translate the models into a form understandable by the solvers for mathematical programs. Algebraic modeling languages provide facility for the management of a mathematical model and its data, and access different general-purpose solvers. The use of algebraic modeling languages (AMLs) simplifies the process of building the prototype model and in some cases makes it possible to create and maintain even the production version of the model. As presented in other chapters of this book, stochastic programming (SP) is needed when exogenous parameters of the mathematical programming problem are random. Dealing with stochasticities in planning is not an easy task. In a standard scenario-by-scenario analysis, the system is optimized for each scenario separately. Varying the scenario hypotheses we can observe the different optimal responses of the system and delineate the “strong trends” of the future. Indeed, this scenarioby-scenario approach implicitly assumes perfect foresight. The method provides a first-stage decision, which is valid only for the scenario under consideration. Having as many decisions as there are scenarios leaves the decision-maker without a clear recommendation. In stochastic programming the whole set of scenarios is combined into an event tree, which describes the unfolding of uncertainties over the period of planning. The model takes into account the uncertainties characterizing the scenarios through stochastic programming techniques. 
This adaptive plan is much closer, in spirit, to the way that decision-makers have to deal with uncertain future", "title": "" }, { "docid": "5db19f15ec148746613bdb48a4ca746a", "text": "Wireless power transfer (WPT) system is a practical and promising way for charging electric vehicles due to its security, convenience, and reliability. The requirement for high-power wireless charging is on the rise, but implementing such a WPT system has been a challenge because of the constraints of the power semiconductors and the installation space limitation at the bottom of the vehicle. In this paper, bipolar coils and unipolar coils are integrated into the transmitting side and the receiving side to make the magnetic coupler more compact while delivering high power. The same-side coils are naturally decoupled; therefore, there is no magnetic coupling between the same-side coils. The circuit model of the proposed WPT system using double-sided LCC compensations is presented. Finite-element analysis tool ANSYS MAXWELL is adopted to simulate and design the magnetic coupler. Finally, an experimental setup is constructed to evaluate the proposed WPT system. The proposed WPT system achieved the dc–dc efficiency at 94.07% while delivering 4.73 kW to the load with a vertical air gap of 150 mm.", "title": "" }, { "docid": "1acbb63a43218d216a2e850d9b3d3fa1", "text": "In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes-a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detecting algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. On the other hand, for data cell COD, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UE in the data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and by receiving a periodic update of the received signal reference power statistic between the UEs and data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. 
Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane, by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outage and compensate for the detected outage in a reliable manner.", "title": "" }, { "docid": "6073d07e5e6a05cbaa84ab8cd734bd12", "text": "Microblogging websites, e.g. Twitter and Sina Weibo, have become a popular platform for socializing and sharing information in recent years. Spammers have also discovered this new opportunity to unfairly overpower normal users with unsolicited content, namely social spams. While it is intuitive for everyone to follow legitimate users, recent studies show that both legitimate users and spammers follow spammers for different reasons. Evidence of users seeking for spammers on purpose is also observed. We regard this behavior as a useful information for spammer detection. In this paper, we approach the problem of spammer detection by leveraging the \"carefulness\" of users, which indicates how careful a user is when she is about to follow a potential spammer. We propose a framework to measure the carefulness, and develop a supervised learning algorithm to estimate it based on known spammers and legitimate users. We then illustrate how spammer detection can be improved in the aid of the proposed measure. Evaluation on a real dataset with millions of users and an online testing are performed on Sina Weibo. The results show that our approach indeed capture the carefulness, and it is effective to detect spammers. In addition, we find that the proposed measure is also beneficial for other applications, e.g. link prediction.", "title": "" }, { "docid": "7ea56b976524d77b7234340318f7e8dc", "text": "Market Integration and Market Structure in the European Soft Drinks Industry: Always Coca-Cola? by Catherine Matraves* This paper focuses on the question of European integration, considering whether the geographic level at which competition takes place differs across the two major segments of the soft drinks industry: carbonated soft drinks and mineral water. Our evidence shows firms are competing at the European level in both segments. Interestingly, the European market is being integrated through corporate strategy, defined as increased multinationality, rather than increased trade flows. To interpret these results, this paper uses the new theory of market structure where the essential notion is that in endogenous sunk cost industries such as soft drinks, the traditional inverse structure-size relation may break down, due to the escalation of overhead expenditures.", "title": "" }, { "docid": "129a85f7e611459cf98dc7635b44fc56", "text": "Pain in the oral and craniofacial system represents a major medical and social problem. Indeed, a U.S. Surgeon General’s report on orofacial health concludes that, ‘‘. . .oral health means much more than healthy teeth. It means being free of chronic oral-facial pain conditions. . .’’ [172]. Community-based surveys indicate that many subjects commonly report pain in the orofacial region, with estimates of >39 million, or 22% of Americans older than 18 years of age, in the United States alone [108]. Other population-based surveys conducted in the United Kingdom [111,112], Germany [91], or regional pain care centers in the United States [54] report similar occurrence rates [135]. 
Importantly, chronic widespread body pain, patient sex and age, and psychosocial factors appear to serve as risk factors for chronic orofacial pain [1,2,92,99,138]. In addition to its high degree of prevalence, the reported intensities of various orofacial pain conditions are similar to that observed with many spinal pain disorders (Fig. 1). Moreover, orofacial pain is derived from many unique target tissues, such as the meninges, cornea, tooth pulp, oral/ nasal mucosa, and temporomandibular joint (Fig. 2), and thus has several unique physiologic characteristics compared with the spinal nociceptive system [23]. Given these considerations, it is not surprising that accurate diagnosis and effective management of orofacial pain conditions represents a significant health care problem. Publications in the field of orofacial pain demonstrate a steady increase over the last several decades (Fig. 3). This is a complex literature; a recent bibliometric analysis of orofacial pain articles published in 2004–2005 indicated that 975 articles on orofacial pain were published in 275 journals from authors representing 54 countries [142]. Thus, orofacial pain disorders represent a complex constellation of conditions with an equally diverse literature base. Accordingly, this review will focus on a summary of major research foci on orofacial pain without attempting to provide a comprehensive review of the entire literature.", "title": "" }, { "docid": "d662e37e868f686a31fda14d4676501a", "text": "Gesture recognition has multiple applications in medical and engineering fields. The problem of hand gesture recognition consists of identifying, at any moment, a given gesture performed by the hand. In this work, we propose a new model for hand gesture recognition in real time. The input of this model is the surface electromyography measured by the commercial sensor the Myo armband placed on the forearm. The output is the label of the gesture executed by the user at any time. The proposed model is based on the k-nearest neighbor and dynamic time warping algorithms. This model can learn to recognize any gesture of the hand. To evaluate the performance of our model, we measured and compared its accuracy at recognizing 5 classes of gestures to the accuracy of the proprietary system of the Myo armband. As a result of this evaluation, we determined that our model performs better (86% accurate) than the Myo system (83%).", "title": "" }, { "docid": "9f13ba2860e70e0368584bb4c36d01df", "text": "Network log messages (e.g., syslog) are expected to be valuable and useful information to detect unexpected or anomalous behavior in large scale networks. However, because of the huge amount of system log data collected in daily operation, it is not easy to extract pinpoint system failures or to identify their causes. In this paper, we propose a method for extracting the pinpoint failures and identifying their causes from network syslog data. The methodology proposed in this paper relies on causal inference that reconstructs causality of network events from a set of time series of events. Causal inference can filter out accidentally correlated events, thus it outputs more plausible causal events than traditional cross-correlation-based approaches can. We apply our method to 15 months’ worth of network syslog data obtained from a nationwide academic network in Japan. The proposed method significantly reduces the number of pseudo correlated events compared with the traditional methods.
Also, through three case studies and comparison with trouble ticket data, we demonstrate the effectiveness of the proposed method for practical network operation.", "title": "" }, { "docid": "73a5fee293c2ae98e205fd5093cf8b9c", "text": "Millimeter-wave (MMW) imaging techniques have been used for the detection of concealed weapons and contraband carried on personnel at airports and other secure locations. The combination of frequency-modulated continuous-wave (FMCW) technology and MMW imaging techniques should lead to compact, light-weight, and low-cost systems which are especially suitable for security and detection application. However, the long signal duration time leads to the failure of the conventional stop-and-go approximation of the pulsed system. Therefore, the motion within the signal duration time needs to be taken into account. Analytical threedimensional (3-D) backscattered signal model, without using the stop-and-go approximation, is developed in this paper. Then, a wavenumber domain algorithm, with motion compensation, is presented. In addition, conventional wavenumber domain methods use Stolt interpolation to obtain uniform wavenumber samples and compute the fast Fourier transform (FFT). This paper uses the 3D nonuniform fast Fourier transform (NUFFT) instead of the Stolt interpolation and FFT. The NUFFT-based method is much faster than the Stolt interpolation-based method. Finally, point target simulations are performed to verify the algorithm.", "title": "" }, { "docid": "ebaeacf1c0eeb4a4818b4ac050e60b0c", "text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.", "title": "" }, { "docid": "d4d48e7275191ab29f805ca86e626c04", "text": "This paper addresses the problem of keyword extraction from conversations, with the goal of using these keywords to retrieve, for each short conversation fragment, a small number of potentially relevant documents, which can be recommended to participants. However, even a short fragment contains a variety of words, which are potentially related to several topics; moreover, using an automatic speech recognition (ASR) system introduces errors among them. Therefore, it is difficult to infer precisely the information needs of the conversation participants. 
We first propose an algorithm to extract keywords from the output of an ASR system (or a manual transcript for testing), which makes use of topic modeling techniques and of a submodular reward function which favors diversity in the keyword set, to match the potential diversity of topics and reduce ASR noise. Then, we propose a method to derive multiple topically separated queries from this keyword set, in order to maximize the chances of making at least one relevant recommendation when using these queries to search over the English Wikipedia. The proposed methods are evaluated in terms of relevance with respect to conversation fragments from the Fisher, AMI, and ELEA conversational corpora, rated by several human judges. The scores show that our proposal improves over previous methods that consider only word frequency or topic similarity, and represents a promising solution for a document recommender system to be used in conversations.", "title": "" }, { "docid": "a9ff593d6eea9f28aa1d2b41efddea9b", "text": "A central task in the study of evolution is the reconstruction of a phylogenetic tree from sequences of current-day taxa. A well supported approach to tree reconstruction performs maximum likelihood (ML) analysis. Unfortunately, searching for the maximum likelihood phylogenetic tree is computationally expensive. In this paper, we describe a new algorithm that uses Structural-EM for learning maximum likelihood trees. This algorithm is similar to the standard EM method for estimating branch lengths, except that during iterations of this algorithms the topology is improved as well as the branch length. The algorithm performs iterations of two steps. In the E-Step, we use the current tree topology and branch lengths to compute expected sufficient statistics, which summarize the data. In the M-Step, we search for a topology that maximizes the likelihood with respect to these expected sufficient statistics. As we show, searching for better topologies inside the M-step can be done efficiently, as opposed to standard search over topologies. We prove that each iteration of this procedure increases the likelihood of the topology, and thus the procedure must converge. We evaluate our new algorithm on both synthetic and real sequence data, and show that it is both dramatically faster and finds more plausible trees than standard search for maximum likelihood phylogenies.", "title": "" } ]
scidocsrr
eeb25d53134c4cc77a78e8cb6d6fabbe
An Intelligent Secure and Privacy-Preserving Parking Scheme Through Vehicular Communications
[ { "docid": "fd61461d5033bca2fd5a2be9bfc917b7", "text": "Vehicular networks are very likely to be deployed in the coming years and thus become the most relevant form of mobile ad hoc networks. In this paper, we address the security of these networks. We provide a detailed threat analysis and devise an appropriate security architecture. We also describe some major design decisions still to be made, which in some cases have more than mere technical implications. We provide a set of security protocols, we show that they protect privacy and we analyze their robustness and efficiency.", "title": "" } ]
[ { "docid": "90b248a3b141fc55eb2e55d274794953", "text": "The aerodynamic admittance function (AAF) has been widely invoked to relate wind pressures on building surfaces to the oncoming wind velocity. In current practice, strip and quasi-steady theories are generally employed in formulating wind effects in the along-wind direction. These theories permit the representation of the wind pressures on building surfaces in terms of the oncoming wind velocity field. Synthesis of the wind velocity field leads to a generalized wind load that employs the AAF. This paper reviews the development of the current AAF in use. It is followed by a new definition of the AAF, which is based on the base bending moment. It is shown that the new AAF is numerically equivalent to the currently used AAF for buildings with linear mode shape and it can be derived experimentally via high frequency base balance. New AAFs for square and rectangular building models were obtained and compared with theoretically derived expressions. Some discrepancies between experimentally and theoretically derived AAFs in the high frequency range were noted.", "title": "" }, { "docid": "97c0dc54f51ebcfe041f18028a15c621", "text": "Mobile learning or “m-learning” is the process of learning when learners are not at a fixed location or time and can exploit the advantage of learning opportunities using mobile technologies. Nowadays, speech recognition is being used in many mobile applications.!Speech recognition helps people to interact with the device as if were they talking to another person. This technology helps people to learn anything using computers by promoting self-study over extended periods of time. The objective of this study focuses on designing and developing a mobile application for the Arabic recognition of spoken Quranic verses. The application is suitable for Android-based devices. The application is called Say Quran and is available on Google Play Store. Moreover, this paper presents the results of a preliminary study to gather feedback from students regarding the developed application.", "title": "" }, { "docid": "1b9f54b275252818f730858654dc4348", "text": "We will demonstrate a conversational products recommendation agent. This system shows how we combine research in personalized recommendation systems with research in dialogue systems to build a virtual sales agent. Based on new deep learning technologies we developed, the virtual agent is capable of learning how to interact with users, how to answer user questions, what is the next question to ask, and what to recommend when chatting with a human user. Normally a descent conversational agent for a particular domain requires tens of thousands of hand labeled conversational data or hand written rules. This is a major barrier when launching a conversation agent for a new domain. We will explore and demonstrate the effectiveness of the learning solution even when there is no hand written rules or hand labeled training data.", "title": "" }, { "docid": "fdc1beef8614e0c85e784597532a1ce4", "text": "This article presents the hardware design and software algorithms of RoboSimian, a statically stable quadrupedal robot capable of both dexterous manipulation and versatile mobility in difficult terrain. The robot has generalized limbs and hands capable of mobility and manipulation, along with almost fully hemispherical 3D sensing with passive stereo cameras. The system is semi-autonomous, enabling low-bandwidth, high latency control operated from a standard laptop. 
Because limbs are used for mobility and manipulation, a single unified mobile manipulation planner is used to generate autonomous behaviors, including walking, sitting, climbing, grasping, and manipulating. The remote operator interface is optimized to designate, parameterize, sequence, and preview behaviors, which are then executed by the robot. RoboSimian placed fifth in the DARPA Robotics Challenge (DRC) Trials, demonstrating its ability to perform disaster recovery tasks in degraded human environments.", "title": "" }, { "docid": "6300f94dbfa58583e15741e5c86aa372", "text": "In this paper, we study the problem of retrieving a ranked list of top-N items to a target user in recommender systems. We first develop a novel preference model by distinguishing different rating patterns of users, and then apply it to existing collaborative filtering (CF) algorithms. Our preference model, which is inspired by a voting method, is well-suited for representing qualitative user preferences. In particular, it can be easily implemented with less than 100 lines of codes on top of existing CF algorithms such as user-based, item-based, and matrix-factorizationbased algorithms. When our preference model is combined to three kinds of CF algorithms, experimental results demonstrate that the preference model can improve the accuracy of all existing CF algorithms such as ATOP and NDCG@25 by 3%–24% and 6%–98%, respectively.", "title": "" }, { "docid": "eb29f281b0237bea84ae26829f5545bd", "text": "Using formal concept analysis, we propose a method for engineering ontology from MongoDB to effectively represent unstructured data. Our method consists of three main phases: (1) generating formal context from a MongoDB, (2) applying formal concept analysis to derive a concept lattice from that formal context, and (3) converting the obtained concept lattice to the first prototype of an ontology. We apply our method on NorthWind database and demonstrate how the proposed mapping rules can be used for learning an ontology from such database. At the end, we discuss about suggestions by which we can improve and generalize the method for more complex database examples.", "title": "" }, { "docid": "51f2ba8b460be1c9902fb265b2632232", "text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.", "title": "" }, { "docid": "a8ddaed8209d09998159014307233874", "text": "Traditional image-based 3D reconstruction methods use multiple images to extract 3D geometry. 
However, it is not always possible to obtain such images, for example when reconstructing destroyed structures using existing photographs or paintings with proper perspective (figure 1), and reconstructing objects without actually visiting the site using images from the web or postcards (figure 2). Even when multiple images are possible, parts of the scene appear in only one image due to occlusions and/or lack of features to match between images. Methods for 3D reconstruction from a single image do exist (e.g. [1] and [2]). We present a new method that is more accurate and more flexible so that it can model a wider variety of sites and structures than existing methods. Using this approach, we reconstructed in 3D many destroyed structures using old photographs and paintings. Sites all over the world have been reconstructed from tourist pictures, web pages, and postcards.", "title": "" }, { "docid": "c6bfdc5c039de4e25bb5a72ec2350223", "text": "Free-energy-based reinforcement learning (FERL) can handle Markov decision processes (MDPs) with high-dimensional state spaces by approximating the state-action value function with the negative equilibrium free energy of a restricted Boltzmann machine (RBM). In this study, we extend the FERL framework to handle partially observable MDPs (POMDPs) by incorporating a recurrent neural network that learns a memory representation sufficient for predicting future observations and rewards. We demonstrate that the proposed method successfully solves POMDPs with high-dimensional observations without any prior knowledge of the environmental hidden states and dynamics. After learning, task structures are implicitly represented in the distributed activation patterns of hidden nodes of the RBM.", "title": "" }, { "docid": "5e6175d56150485d559d0c1a963e12b8", "text": "High-resolution depth map can be inferred from a lowresolution one with the guidance of an additional highresolution texture map of the same scene. Recently, deep neural networks with large receptive fields are shown to benefit applications such as image completion. Our insight is that super resolution is similar to image completion, where only parts of the depth values are precisely known. In this paper, we present a joint convolutional neural pyramid model with large receptive fields for joint depth map super-resolution. Our model consists of three sub-networks, two convolutional neural pyramids concatenated by a normal convolutional neural network. The convolutional neural pyramids extract information from large receptive fields of the depth map and guidance map, while the convolutional neural network effectively transfers useful structures of the guidance image to the depth image. Experimental results show that our model outperforms existing state-of-the-art algorithms not only on data pairs of RGB/depth images, but also on other data pairs like color/saliency and color-scribbles/colorized images.", "title": "" }, { "docid": "e70425a0b9d14ff4223f3553de52c046", "text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. 
Our algorithm makes full use of the parallelism of the GPU, runs entirely on the GPU, and is ten times faster than the previous CPU algorithm.", "title": "" }, { "docid": "d05c6ec4bfb24f283e7f8baa08985e70", "text": "This paper describes a recently developed architecture for a Hardware-in-the-Loop simulator for Unmanned Aerial Vehicles. The principal idea is to use the advanced modeling capabilities of Simulink rather than hard-coded software as the flight dynamics simulating engine. By harnessing Simulink’s ability to precisely model virtually any dynamical system or phenomenon, this newly developed simulator facilitates the development, validation and verification steps of flight control algorithms. Although the presented architecture is used in conjunction with a particular commercial autopilot, the same approach can be easily implemented on a flight platform with a different autopilot. The paper shows the implementation of the flight modeling simulation component in Simulink, supported with interfacing software to a commercial autopilot. This offers the academic community numerous advantages for hardware-in-the-loop simulation of flight dynamics and control tasks. The developed setup has been rigorously tested under a wide variety of conditions. Results from hardware-in-the-loop and real flight tests are presented and compared to validate its adequacy and assess its usefulness as a rapid prototyping tool.", "title": "" }, { "docid": "ce99ce3fb3860e140164e7971291f0fa", "text": "We describe the development and psychometric characteristics of the Generalized Workplace Harassment Questionnaire (GWHQ), a 29-item instrument developed to assess harassing experiences at work in five conceptual domains: verbal aggression, disrespect, isolation/exclusion, threats/bribes, and physical aggression. Over 1700 current and former university employees completed the GWHQ at three time points. Factor analytic results at each wave of data suggested a five-factor solution that did not correspond to the original five conceptual factors. We suggest a revised scoring scheme for the GWHQ utilizing four of the empirically extracted factors: covert hostility, verbal hostility, manipulation, and physical hostility. Covert hostility was the most frequently experienced type of harassment, followed by verbal hostility, manipulation, and physical hostility. Verbal hostility, covert hostility, and manipulation were found to be significant predictors of psychological distress.", "title": "" }, { "docid": "5f806baa9987146a642fbce106f43291", "text": "Biofouling is generally undesirable for many applications. An overview of the medical, marine and industrial fields susceptible to fouling is presented. Two types of fouling include biofouling from organism colonization and inorganic fouling from non-living particles. Nature offers many solutions to control fouling through various physical and chemical control mechanisms. Examples include low drag, low adhesion, wettability (water repellency and attraction), microtexture, grooming, sloughing, various miscellaneous behaviours and chemical secretions. A survey of nature's flora and fauna was taken in order to discover new antifouling methods that could be mimicked for engineering applications. Antifouling methods currently employed, ranging from coatings to cleaning techniques, are described.
New antifouling methods will presumably incorporate a combination of physical and chemical controls.", "title": "" }, { "docid": "337a738d386fa66725fe9be620365d5f", "text": "Change in a software is crucial to incorporate defect correction and continuous evolution of requirements and technology. Thus, development of quality models to predict the change proneness attribute of a software is important to effectively utilize and plan the finite resources during maintenance and testing phase of a software. In the current scenario, a variety of techniques like the statistical techniques, the Machine Learning (ML) techniques and the Search-based techniques (SBT) are available to develop models to predict software quality attributes. In this work, we assess the performance of ten machine learning and search-based techniques using data collected from three open source software. We first develop a change prediction model using one data set and then we perform inter-project validation using two other data sets in order to obtain unbiased and generalized results. The results of the study indicate comparable performance of SBT with other employed statistical and ML techniques. This study also supports inter project validation as we successfully applied the model created using the training data of one project on other similar projects and yield good results.", "title": "" }, { "docid": "c41c38377b1a824e1d021794802c7aed", "text": "This paper presents an optimization methodology that includes three important components necessary for a systematic approach to naval ship concept design. These are: • An efficient and effective search of design space for non-dominated designs • Well-defined and quantitative measures of objective attributes • An effective format to describe the design space and to present non-dominated concepts for rational selection by the customer A Multiple-Objective Genetic Optimization (MOGO) is used to search design parameter space and identify non-dominated design concepts based on life cycle cost and mission effectiveness. A nondominated frontier and selected generations of feasible designs are used to present results to the customer for selection of preferred alternatives. A naval ship design application is presented.", "title": "" }, { "docid": "261f146b67fd8e13d1ad8c9f6f5a8845", "text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.", "title": "" }, { "docid": "3c95e090ab4e57f2fd21543226ad55ae", "text": "Increase in the area and neuron number of the cerebral cortex over evolutionary time systematically changes its computational properties. One of the fundamental developmental mechanisms generating the cortex is a conserved rostrocaudal gradient in duration of neuron production, coupled with distinct asymmetries in the patterns of axon extension and synaptogenesis on the same axis. 
A small set of conserved sensorimotor areas with well-defined thalamic input anchors the rostrocaudal axis. These core mechanisms organize the cortex into two contrasting topographic zones, while systematically amplifying hierarchical organization on the rostrocaudal axis in larger brains. Recent work has shown that variation in 'cognitive control' in multiple species correlates best with absolute brain size, and this may be the behavioral outcome of this progressive organizational change.", "title": "" }, { "docid": "172aaf47ee3f89818abba35a463ecc76", "text": "I examined the relationship of recalled and diary recorded frequency of penile-vaginal intercourse (FSI), noncoital partnered sexual activity, and masturbation to measured waist and hip circumference in 120 healthy adults aged 19-38. Slimmer waist (in men and in the sexes combined) and slimmer hips (in men and women) were associated with greater FSI. Slimmer waist and hips were associated with rated importance of intercourse for men. Noncoital partnered sexual activity had a less consistent association with slimness. Slimmer waist and hips were associated with less masturbation (in men and in the sexes combined). I discuss the results in terms of differences between different sexual behaviors, attractiveness, emotional relatedness, physical sensitivity, sexual dysfunction, sociobiology, psychopharmacological aspects of excess fat and carbohydrate consumption, and implications for sex therapy.", "title": "" } ]
scidocsrr
59de3a7c8e565c21168b30bff7116da1
Scene Segmentation with Conditional Random Fields Learned from Partially Labeled Images
[ { "docid": "350c899dbd0d9ded745b70b6f5e97d19", "text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.", "title": "" } ]
[ { "docid": "9b2e025c6bb8461ddb076301003df0e4", "text": "People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is experiencing a growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are lack of a proper dataset, methodology, baselines and statistical analysis of how information from different modality sources relate to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinionlevel Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.", "title": "" }, { "docid": "0ef173f7c32074bfebeab524354de1ec", "text": "Text classification is an important problem with many applications. Traditional approaches represent text as a bagof-words and build classifiers based on this representation. Rather than words, entity phrases, the relations between the entities, as well as the types of the entities and relations carry much more information to represent the texts. This paper presents a novel text as network classification framework, which introduces 1) a structured and typed heterogeneous information networks (HINs) representation of texts, and 2) a meta-path based approach to link texts. We show that with the new representation and links of texts, the structured and typed information of entities and relations can be incorporated into kernels. Particularly, we develop both simple linear kernel and indefinite kernel based on metapaths in the HIN representation of texts, where we call them HIN-kernels. Using Freebase, a well-known world knowledge base, to construct HIN for texts, our experiments on two benchmark datasets show that the indefinite HIN-kernel based on weighted meta-paths outperforms the state-of-theart methods and other HIN-kernels.", "title": "" }, { "docid": "c3cc032538a10ab2f58ff45acb6d16d0", "text": "How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. 
We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.", "title": "" }, { "docid": "8df74743ab51e92c43ecf272485470c6", "text": "We propose a new detection method to predict a vehicle's trajectory and use it for detecting lane changes of surrounding vehicles. According to the previous research, more than 90% of the car crashes are caused by human errors, and lane changes are the main factor. Therefore, if a lane change can be detected before a vehicle crosses the centerline, accident rates will decrease. Previously reported detection methods have the problem of frequent false alarms caused by zigzag driving that can result in user distrust in driving safety support systems. Most cases of zigzag driving are caused by the abortion of a lane change due to the presence of adjacent vehicles on the next lane. Our approach reduces false alarms by considering the possibility of a crash with adjacent vehicles by applying trajectory prediction when the target vehicle attempts to change a lane, and it reflects the result of lane-change detection. We used a traffic dataset with more than 500 lane changes and confirmed that the proposed method can considerably improve the detection performance.", "title": "" }, { "docid": "fa60a5fcc0ed7abaa981db19e9f4c228", "text": "This paper provides an overview of the panel VAR models used in macroeconomics and …nance. It discusses what are their distinctive features, what they are used for, and how they can be derived from economic theory. It also describes how they are estimated and how shock identi…cation is performed, and compares panel VARs to other approaches used in the literature to deal with dynamic models involving heterogeneous units. Finally, it shows how structural time variation can be dealt with and illustrates the challanges that they present to researchers interested in studying cross-unit dynamics interdependences in heterogeneous setups. JEL classi…cation: C11, C30, C53. Key words: Panel VAR, Estimation, Identi…cation, Inference.", "title": "" }, { "docid": "5409b6586b89bd3f0b21e7984383e1e1", "text": "The dream of creating artificial devices that reach or outperform human intelligence is many centuries old. In this talk I present an elegant parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. The theory reduces all conceptual AI problems to pure computational questions. 
The necessary and sufficient ingredients are Bayesian probability theory; algorithmic information theory; universal Turing machines; the agent framework; sequential decision theory; and reinforcement learning, which are all important subjects in their own right. I also present some recent approximations, implementations, and applications of this modern top-down approach to AI. Marcus Hutter 3 Universal Artificial Intelligence Overview Goal: Construct a single universal agent that learns to act optimally in any environment. State of the art: Formal (mathematical, non-comp.) definition of such an agent. Accomplishment: Well-defines AI. Formalizes rational intelligence. Formal “solution” of the AI problem in the sense of ... =⇒ Reduces the conceptional AI problem to a (pure) computational problem. Evidence: Mathematical optimality proofs and some experimental results. Marcus Hutter 4 Universal Artificial Intelligence", "title": "" }, { "docid": "2692d3fa7d60449b85df294c74c721ec", "text": "This paper presents an approach to detect real-world events as manifested in news texts. We use vector space models, particularly neural embeddings (prediction-based distributional models). The models are trained on a large ‘reference’ corpus and then successively updated with new textual data from daily news. For given words or multi-word entities, calculating difference between their vector representations in two or more models allows to find out association shifts that happen to these words over time. The hypothesis is tested on country names, using news corpora for English and Russian language. We show that this approach successfully extracts meaningful temporal trends for named entities regardless of a language.", "title": "" }, { "docid": "c0d8f6f343f2602d9a32ba228f51f315", "text": "Classification is a data mining (machine learning) technique used to predict group membership for data instances. In this paper, we present the basic classification techniques. Several major kinds of classification method including decision tree induction, Bayesian networks, k-nearest neighbor classifier, case-based reasoning, genetic algorithm and fuzzy logic techniques. The goal of this survey is to provide a comprehensive review of different classification techniques in data mining.", "title": "" }, { "docid": "bf0d945fb34f704c8fee46cc5868b751", "text": "Significant variation of the resource kinetic energy, in the form of wind speed, results in substantially reduced energy capture in a fixed speed wind turbine. In order to increase the wind energy capture in the turbine, variable speed generation (VSG) strategies have been proposed and implemented. However, that requires an expensive AC/AC power converter which increases the capital investment significantly. Consequently doubly-fed systems have been proposed to reduce the size of the power converter and thereby the associated cost. Additionally, in doubly-fed systems, at a fixed operating point (power and speed), power flow can be regulated between the two winding systems on the machine. This feature can be utilized to essentially minimize losses in the machine associated with the given operating point or achieve other desired performance enhancements. In this paper, a brushless doubly-fed machine (BDFM) is utilized to develop a VSG wind power generator. The VSG controller employs a wind speed estimation based maximum power point tracker (MPPT) and a heuristic model based maximum efficiency point tracker (MEPT) to optimize the power output of the system. 
The controller has been verified for efficacy on a 1.5 kW laboratory VSG wind generator. The strategy is applicable to all doubly-fed configurations, including conventional wound rotor induction machines, Scherbius cascades, brushless doubly fed machines, and doubly-fed reluctance machines.", "title": "" }, { "docid": "b555efbace3e21ff955acd5b2408f648", "text": "Aviation security screening has become very important in recent years. It was shown by Schwaninger et al. (2004) that certain image-based factors influence detection when visually inspecting X-ray images of passenger bags. Threat items are more difficult to recognize when placed in close-packed bags (effect of bag complexity), when superimposed by other objects (effect of superposition), and when rotated (effect of viewpoint). The X-ray object recognition rest (X-ray ORT) was developed to measure the abilities needed to cope with these factors. In this study, we examined the reliability and validity of the X-ray ORT based on a sample of 453 aviation security screeners and 453 novices. Cronbach Alpha and split-half analysis revealed high reliability. Validity was examined using internal, convergent, discriminant and criterion-related validity estimates. The results show that the X-ray ORT is a reliable and valid instrument for measuring visual abilities needed in X-ray screening. This makes the X-ray ORT an interesting tool for competency and pre-employment assessment purposes.", "title": "" }, { "docid": "815215b56160ab38745fded16edd31d6", "text": "Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.", "title": "" }, { "docid": "da3634b5a14829b22546389e56425017", "text": "Homomorphic encryption (HE)—the ability to perform computations on encrypted data—is an attractive remedy to increasing concerns about data privacy in the field of machine learning. However, building models that operate on ciphertext is currently labor-intensive and requires simultaneous expertise in deep learning, cryptography, and software engineering. Deep learning frameworks, together with recent advances in graph compilers, have greatly accelerated the training and deployment of deep learning models to a variety of computing platforms. 
Here, we introduce nGraph-HE, an extension of the nGraph deep learning compiler, which allows data scientists to deploy trained models with popular frameworks like TensorFlow, MXNet and PyTorch directly, while simply treating HE as another hardware target. This combination of frameworks and graph compilers greatly simplifies the development of privacy-preserving machine learning systems, provides a clean abstraction barrier between deep learning and HE, allows HE libraries to exploit HE-specific graph optimizations, and comes at a low cost in runtime overhead versus native HE operations.", "title": "" }, { "docid": "201db4c8aaa43d766ec707d7fff5fd65", "text": "Sentiment Analysis is a way of considering and grouping of opinions or views expressed in a text. In this age when social media technologies are generating vast amounts of data in the form of tweets, Facebook comments, blog posts, and Instagram comments, sentiment analysis of these usergenerated data provides very useful feedback. Since it is undisputable facts that twitter sentiment analysis has become an effective way in determining public sentiment about a certain topic product or issue. Thus, a lot of research have been ongoing in recent years to build efficient models for sentiment classification accuracy and precision. In this work, we analyse twitter data using support vector machine algorithm to classify tweets into positive, negative and neutral sentiments. This research try to find the relationship between feature hash bit size and the accuracy and precision of the model that is generated. We measure the effect of varying the feature has bit size on the accuracy and precision of the model. The research showed that as the feature hash bit size increases at a certain point the accuracy and precision value started decreasing with increase in the feature hash bit size. General Terms Hadoop, Data Processing, Machine learning", "title": "" }, { "docid": "97f2f0dd427c5f18dae178bc2fd620ba", "text": "NOTICE The contents of this report reflect the views of the author, who is responsible for the facts and accuracy of the data presented herein. The contents do not necessarily reflect policy of the Department of Transportation. This report does not constitute a standard, specification, or regulation. Abstract This report summarizes the historical development of the resistance factors developed for the geotechnical foundation design sections of the AASHTO LRFD Bridge Design Specifications, and recommends how to specifically implement recent developments in resistance factors for geotechnical foundation design. In addition, recommendations regarding the load factor for downdrag loads, based on statistical analysis of available load test data and reliability theory, are provided. The scope of this report is limited to shallow and deep foundation geotechnical design at the strength limit state. 17. Forward With the advent of the AASHTO Load and Resistance Factor (LRFD) Bridge Design Specifications in 1992, there has been considerable focus on the geotechnical aspects of those specifications, since most geotechnical engineers are unfamiliar with LRFD concepts. This is especially true regarding the quantification of the level of safety needed for design. Up to the time of writing of this report, the geotechnical profession has typically used safety factors within an allowable stress design (ASD) framework (also termed working stress design, or WSD). 
For those agencies that use Load Factor Design (LFD), the safety factors for the foundation design are used in combination with factored loads in accordance with the AASHTO Standard Specifications for Highway Bridges (2002). The adaptation of geotechnical design and the associated safety factors to what would become the first edition of the AASHTO LRFD Bridge Design Specifications began in earnest with the publication of the results of NCHRP Project 24-4 as NCHRP Report 343 (Barker, et al., 1991). The details of the calibrations they conducted are provided in an unpublished Appendix to that report (Appendix A). This is the primary source of resistance factors for foundation design as currently published in AASHTO (2004). Since that report was published, changes have occurred in the specifications regarding load factors and design methodology that have required re-evaluation of the resistance factors. Furthermore, new studies have been or are being conducted that are yet to be implemented in the LRFD specifications. In 2002, the AASHTO Bridge Subcommittee initiated an effort, with the help of the Federal Highway Administration (FHWA), to rewrite the foundation design sections of the AASHTO …", "title": "" }, { "docid": "4b717fc5c3ef0096a3f2829dd10b3bd6", "text": "The problem of learning to distinguish good inputs from malicious has come to be known as adversarial classification emphasizing the fact that, unlike traditional classification, the adversary can manipulate input instances to avoid being so classified. We offer the first general theoretical analysis of the problem of adversarial classification, resolving several important open questions in the process. First, we significantly generalize previous results on adversarial classifier reverse engineering (ACRE), showing that if a classifier can be efficiently learned, it can subsequently be efficiently reverse engineered with arbitrary precision. We extend this result to randomized classification schemes, but now observe that reverse engineering is imperfect, and its efficacy depends on the defender’s randomization scheme. Armed with this insight, we proceed to characterize optimal randomization schemes in the face of adversarial reverse engineering and classifier manipulation. What we find is quite surprising: in all the model variations we consider, the defender’s optimal policy tends to be either to randomize uniformly (ignoring baseline classification accuracy), which is the case for targeted attacks, or not to randomize at all, which is typically optimal when attacks are indiscriminate.", "title": "" }, { "docid": "c4a272588dfd9e636d72fce09501ad8d", "text": "Semantic segmentation is the task of assigning a label to each pixel in the image.In recent years, deep convolutional neural networks have been driving advances in multiple tasks related to cognition. Although, DCNNs have resulted in unprecedented visual recognition performances, they offer little transparency. To understand how DCNN based models work at the task of semantic segmentation, we try to analyze the DCNN models in semantic segmentation. We try to find the importance of global image information for labeling pixels. Based on the experiments on discriminative regions, and modeling of fixations, we propose a set of new training loss functions for fine-tuning DCNN based models. The proposed training regime has shown improvement in performance of DeepLab Large FOV(VGG-16) Segmentation model for PASCAL VOC 2012 dataset. 
However, further test remains to conclusively evaluate the benefits due to the proposed loss functions across models, and data-sets.", "title": "" }, { "docid": "8beca44b655835e7a33abd8f1f343a6f", "text": "Taxonomies have been developed as a mechanism for cyber attack categorisation. However, when one considers the recent and rapid evolution of attacker techniques and targets, the applicability and effectiveness of these taxonomies should be questioned. This paper applies two approaches to the evaluation of seven taxonomies. The first employs a criteria set, derived through analysis of existing works in which critical components to the creation of taxonomies are defined. The second applies historical attack data to each taxonomy under review, more specifically, attacks in which industrial control systems have been targeted. This combined approach allows for a more in-depth understanding of existing taxonomies to be developed, from both a theoretical and practical perspective.", "title": "" }, { "docid": "1ce2a5e4aafed56039597524f59e2bcc", "text": "Statistical mediation methods provide valuable information about underlying mediating psychological processes, but the ability to infer that the mediator variable causes the outcome variable is more complex than widely known. Researchers have recently emphasized how violating assumptions about confounder bias severely limits causal inference of the mediator to dependent variable relation. Our article describes and addresses these limitations by drawing on new statistical developments in causal mediation analysis. We first review the assumptions underlying causal inference and discuss three ways to examine the effects of confounder bias when assumptions are violated. We then describe four approaches to address the influence of confounding variables and enhance causal inference, including comprehensive structural equation models, instrumental variable methods, principal stratification, and inverse probability weighting. Our goal is to further the adoption of statistical methods to enhance causal inference in mediation studies.", "title": "" }, { "docid": "1b1953e3dd28c67e7a8648392422df88", "text": "We examined Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) General Ability Index (GAI) and Full Scale Intelligence Quotient (FSIQ) discrepancies in 100 epilepsy patients; 44% had a significant GAI > FSIQ discrepancy. GAI-FSIQ discrepancies were correlated with the number of antiepileptic drugs taken and duration of epilepsy. Individual antiepileptic drugs differentially interfere with the expression of underlying intellectual ability in this group. FSIQ may significantly underestimate levels of general intellectual ability in people with epilepsy. Inaccurate representations of FSIQ due to selective impairments in working memory and reduced processing speed obscure the contextual interpretation of performance on other neuropsychological tests, and subtle localizing and lateralizing signs may be missed as a result.", "title": "" }, { "docid": "f5fdc2aac2caa3f8ac4648ebe599d707", "text": "This paper describes a Genetic Algorithms approach to a manpower-scheduling problem arising at a major UK hospital. Although Genetic Algorithms have been successfully used for similar problems in the past, they always had to overcome the limitations of the classical Genetic Algorithms paradigm in handling the conflict between objectives and constraints. 
The approach taken here is to use an indirect coding based on permutations of the nurses, and a heuristic decoder that builds schedules from these permutations. Computational experiments based on 52 weeks of live data are used to evaluate three different decoders with varying levels of intelligence, and four well-known crossover operators. Results are further enhanced by introducing a hybrid crossover operator and by making use of simple bounds to reduce the size of the solution space. The results reveal that the proposed algorithm is able to find high quality solutions and is both faster and more flexible than a recently published Tabu Search approach.", "title": "" } ]
scidocsrr
7c023d886d56a6eec9c34bd3f0f3e4f5
NVMalloc: Exposing an Aggregate SSD Store as a Memory Partition in Extreme-Scale Machines
[ { "docid": "5aa8167b3aaf4d0b0a4753ad64354366", "text": "New storage-class memory (SCM) technologies, such as phase-change memory, STT-RAM, and memristors, promise user-level access to non-volatile storage through regular memory instructions. These memory devices enable fast user-mode access to persistence, allowing regular in-memory data structures to survive system crashes.\n In this paper, we present Mnemosyne, a simple interface for programming with persistent memory. Mnemosyne addresses two challenges: how to create and manage such memory, and how to ensure consistency in the presence of failures. Without additional mechanisms, a system failure may leave data structures in SCM in an invalid state, crashing the program the next time it starts.\n In Mnemosyne, programmers declare global persistent data with the keyword \"pstatic\" or allocate it dynamically. Mnemosyne provides primitives for directly modifying persistent variables and supports consistent updates through a lightweight transaction mechanism. Compared to past work on disk-based persistent memory, Mnemosyne reduces latency to storage by writing data directly to memory at the granularity of an update rather than writing memory pages back to disk through the file system. In tests emulating the performance characteristics of forthcoming SCMs, we show that Mnemosyne can persist data as fast as 3 microseconds. Furthermore, it provides a 35 percent performance increase when applied in the OpenLDAP directory server. In microbenchmark studies we find that Mnemosyne can be up to 1400% faster than alternative persistence strategies, such as Berkeley DB or Boost serialization, that are designed for disks.", "title": "" } ]
[ { "docid": "de94c8531839326cc549b97855f8348a", "text": "In this paper, we investigate the prediction of daily stock prices of the top five companies in the Thai SET50 index. A Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) is applied to forecast the next daily stock price (High, Low, Open, Close). Deep Belief Network (DBN) is applied to compare the result with LSTM. The test data are CPALL, SCB, SCC, KBANK, and PTT from the SET50 index. The purpose of selecting these five stocks is to compare how the model performs in different stocks with various volatility. There are two experiments of five stocks from the SET50 index. The first experiment compared the MAPE with different length of training data. The experiment is conducted by using training data for one, three, and five-year. PTT and SCC stock give the lowest median value of MAPE error for five-year training data. KBANK, SCB, and CPALL stock give the lowest median value of MAPE error for one-year training data. In the second experiment, the number of looks back and input are varied. The result with one look back and four inputs gives the best performance for stock price prediction. By comparing different technique, the result show that LSTM give the best performance with CPALL, SCB, and KTB with less than 2% error. DBN give the best performance with PTT and SCC with less than 2% error.", "title": "" }, { "docid": "ea31a93d54e45eede5ba3e6263e8a13e", "text": "Clustering methods for data-mining problems must be extremely scalable. In addition, several data mining applications demand that the clusters obtained be balanced, i.e., of approximately the same size or importance. In this paper, we propose a general framework for scalable, balanced clustering. The data clustering process is broken down into three steps: sampling of a small representative subset of the points, clustering of the sampled data, and populating the initial clusters with the remaining data followed by refinements. First, we show that a simple uniform sampling from the original data is sufficient to get a representative subset with high probability. While the proposed framework allows a large class of algorithms to be used for clustering the sampled set, we focus on some popular parametric algorithms for ease of exposition. We then present algorithms to populate and refine the clusters. The algorithm for populating the clusters is based on a generalization of the stable marriage problem, whereas the refinement algorithm is a constrained iterative relocation scheme. The complexity of the overall method is O(kN log N) for obtaining k balanced clusters from N data points, which compares favorably with other existing techniques for balanced clustering. In addition to providing balancing guarantees, the clustering performance obtained using the proposed framework is comparable to and often better than the corresponding unconstrained solution. Experimental results on several datasets, including high-dimensional (>20,000) ones, are provided to demonstrate the efficacy of the proposed framework.", "title": "" }, { "docid": "6057638a2a1cfd07ab2e691baf93a468", "text": "Cybersecurity in smart grids is of critical importance given the heavy reliance of modern societies on electricity and the recent cyberattacks that resulted in blackouts. The evolution of the legacy electric grid to a smarter grid holds great promises but also comes up with an increasesd attack surface. 
In this article, we review state-of-the-art developments in cybersecurity for smart grids, both from a standardization and a technical perspective. This work identifies important areas of future research for academia, and for collaboration with government and industry stakeholders, to enhance smart grid cybersecurity and make this new paradigm not only beneficial and valuable but also safe and secure.", "title": "" }, { "docid": "7499f88de9d2f76008dc38e96b08ca0a", "text": "Refractory and super-refractory status epilepticus (SE) are serious illnesses with a high risk of morbidity and even fatality. In the setting of refractory generalized convulsive SE (GCSE), there is ample justification to use continuous infusions of highly sedating medications—usually midazolam, pentobarbital, or propofol. Each of these medications has advantages and disadvantages, and the particulars of their use remain controversial. Continuous EEG monitoring is crucial in guiding the management of these critically ill patients: in diagnosis, in detecting relapse, and in adjusting medications. Forms of SE other than GCSE (and its continuation in a “subtle” or nonconvulsive form) should usually be treated far less aggressively, often with nonsedating anti-seizure drugs (ASDs). Management of “non-classic” NCSE in ICUs is very complicated and controversial, and some cases may require aggressive treatment. One of the largest problems in refractory SE (RSE) treatment is withdrawing coma-inducing drugs, as the prolonged ICU courses they prompt often lead to additional complications. In drug withdrawal after control of convulsive SE, nonsedating ASDs can assist; medical management is crucial; and some brief seizures may have to be tolerated. For the most refractory of cases, immunotherapy, ketamine, ketogenic diet, and focal surgery are among several newer or less standard treatments that can be considered. The morbidity and mortality of RSE is substantial, but many patients survive and even return to normal function, so RSE should be treated promptly and as aggressively as the individual patient and type of SE indicate.", "title": "" }, { "docid": "30740e33cdb2c274dbd4423e8f56405e", "text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.", "title": "" }, { "docid": "3105a48f0b8e45857e8d48e26b258e04", "text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models, have been discussed quite comprehensively, the design of methods is rarely addressed. But methods appear to be of utmost importance, particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering.
Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.", "title": "" }, { "docid": "804920bbd9ee11cc35e93a53b58e7e79", "text": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries.", "title": "" }, { "docid": "25bb1034b836e68ac1939265e33b0e22", "text": "As it requires a huge number of parameters when exposed to high dimensional inputs in video detection and classification, there is a grand challenge to develop a compact yet accurate video comprehension at terminal devices. Current works focus on optimizations of video detection and classification in a separated fashion. In this paper, we introduce a video comprehension (object detection and action recognition) system for terminal devices, namely DEEPEYE. Based on You Only Look Once (YOLO), we have developed an 8-bit quantization method when training YOLO; and also developed a tensorized-compression method of Recurrent Neural Network (RNN) composed of features extracted from YOLO. The developed quantization and tensorization can significantly compress the original network model yet with maintained accuracy. Using the challenging video datasets: MOMENTS and UCF11 as benchmarks, the results show that the proposed DEEPEYE achieves 3.994× model compression rate with only 0.47% mAP decreased; and 15, 047× parameter reduction and 2.87× speed-up with 16.58% accuracy improvement.", "title": "" }, { "docid": "a1757ee58eb48598d3cd6e257b53cd10", "text": "This paper examines the issues of puzzle design in the context of collaborative gaming. The qualitative research approach involves both the conceptual analysis of key terminology and a case study of a collaborative game called eScape. 
The case study is a design experiment, involving both the process of designing a game environment and an empirical study, where data is collected using multiple methods. The findings and conclusions emerging from the analysis provide insight into the area of multiplayer puzzle design. The analysis and reflections answer questions on how to create meaningful puzzles requiring collaboration and how far game developers can go with collaboration design. The multiplayer puzzle design introduces a new challenge for game designers. Group dynamics, social roles and an increased level of interaction require changes in the traditional conceptual understanding of a single-player puzzle.", "title": "" }, { "docid": "8b519431416a4bac96a8a975d8973ef9", "text": "A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on ways to further improvement.", "title": "" }, { "docid": "337ba912e6c23324ba2e996808a4b060", "text": "Comprehensive investigations were conducted on identifying integration efforts needed to adapt plasma dicing technology in BEOL pre-production environments. First, the authors identified the suitable process flows. Within the process flow, laser grooving before plasma dicing was shown to be a key unit process to control resulting die sidewall quality. Significant improvement on laser grooving quality has been demonstrated. Through these efforts, extremely narrow kerfs and near ideal dies strengths were achieved on bare Si dies. Plasma dicing process generates fluorinated polymer residues on both Si die sidewalls and under the topography overhangs on wafer surfaces, such as under the solder balls or microbumps. Certain areas cannot be cleaned by in-chamber post-treatments. Multiple cleaning methods demonstrated process capability and compatibility to singulated dies-on-tape handling. Lastly, although many methods exist commercially for backmetal and DAF separations, the authors' investigation is still inconclusive on one preferred process for post-plasma dicing die separations.", "title": "" }, { "docid": "ecb927989b504aa26fddca0c0ce76c76", "text": "This dissertation presents an integrated system for producing Natural Logic inferences, which are used in a wide variety of natural language understanding tasks. Natural Logic is the process of creating valid inferences by making incremental edits to natural language expressions with respect to a universal monotonicity calculus, without resorting to logical representation of the expressions (using First Order Logic for instance). The system generates inferences from surface forms using a three-stage process. First, each sentence is subjected to syntactic analysis, using a purpose-built syntactic parser. Then the rules of the monotonicity calculus are applied, specifying the directionality of entailment for each sentence constituents. A constituent can be either upward or downward entailing, meaning that we may replace it with a semantically broader or narrower term. 
Finally, we can find all suitable replacement terms for each target word by using the WordNet lexical database, which contains hypernymic and hyponymic relations. Using Combinatory Categorial Grammar, we were able to incorporate the monotonicity determination step in the syntactic derivation process. In order to achieve wide coverage over English sentences we had to introduce statistical models into our syntactic parser. At the current stage we have implemented a simple statistical model similar to those of Probabilistic Context-Free Grammars. The system is intended to provide input to “deep” reasoning engines, used for higher-level Natural Language Processing applications such as Recognising Textual Entailment. In addition to independently evaluating each component of the system, we present our comprehensive findings using Cyc, a large-scale knowledge base, and we outline a solution for its relatively limited concept coverage.", "title": "" }, { "docid": "96fb1910ed0127ad330fd427335b4587", "text": "OBJECTIVES\nThe aim of this cross-sectional in vivo study was to assess the effect of green tea and honey solutions on the level of salivary Streptococcus mutans.\n\n\nSTUDY DESIGN\nA convenient sample of 30 Saudi boys aged 7-10 years were randomly assigned into 2 groups of 15 each. Saliva sample was collected for analysis of level of S. mutans before rinsing. Commercial honey and green tea were prepared for use and each child was asked to rinse for two minutes using 10 mL of the prepared honey or green tea solutions according to their group. Saliva samples were collected again after rinsing. The collected saliva samples were prepared and colony forming unit (CFU) of S. mutans per mL of saliva was calculated.\n\n\nRESULTS\nThe mean number of S. mutans before and after rinsing with honey and green tea solutions were 2.28* 10(8)(2.622*10(8)), 5.64 *10(7)(1.03*10(8)), 1.17*10(9)(2.012*10(9)) and 2.59*10(8) (3.668*10(8)) respectively. A statistically significant reduction in the average number of S. mutans at baseline and post intervention in the children who were assigned to the honey (P=0.001) and green tea (P=0.001) groups was found.\n\n\nCONCLUSIONS\nA single time mouth rinsing with honey and green tea solutions for two minutes effectively reduced the number of salivary S. mutans of 7-10 years old boys.", "title": "" }, { "docid": "8de5b77f3cb4f1c20ff6cc11b323ba9c", "text": "The Internet of Things (IoT) paradigm refers to the network of physical objects or \"things\" embedded with electronics, software, sensors, and connectivity to enable objects to exchange data with servers, centralized systems, and/or other connected devices based on a variety of communication infrastructures. IoT makes it possible to sense and control objects creating opportunities for more direct integration between the physical world and computer-based systems. IoT will usher automation in a large number of application domains, ranging from manufacturing and energy management (e.g. SmartGrid), to healthcare management and urban life (e.g. SmartCity). However, because of its finegrained, continuous and pervasive data acquisition and control capabilities, IoT raises concerns about the security and privacy of data. Deploying existing data security solutions to IoT is not straightforward because of device heterogeneity, highly dynamic and possibly unprotected environments, and large scale. 
In this talk, after outlining key challenges in data security and privacy, we present initial approaches to securing IoT data, including efficient and scalable encryption protocols, software protection techniques for small devices, and fine-grained data packet loss analysis for sensor networks.", "title": "" }, { "docid": "c1b5b1dcbb3e7ff17ea6ad125bbc4b4b", "text": "This article focuses on a new type of wireless devices in the domain between RFIDs and sensor networks—Energy-Harvesting Active Networked Tags (EnHANTs). Future EnHANTs will be small, flexible, and self-powered devices that can be attached to objects that are traditionally not networked (e.g., books, furniture, toys, produce, and clothing). Therefore, they will provide the infrastructure for various tracking applications and can serve as one of the enablers for the Internet of Things. We present the design considerations for the EnHANT prototypes, developed over the past 4 years. The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and dynamically adapt their communications and networking patterns to the energy harvesting and battery states. We describe a small-scale testbed that uniquely allows evaluating different algorithms with trace-based light energy inputs. Then, we experimentally evaluate the performance of different energy-harvesting adaptive policies with organic solar cells and UWB-IR transceivers. Finally, we discuss the lessons learned during the prototype and testbed design process.", "title": "" }, { "docid": "4147b26531ca1ec165735688481d2684", "text": "Problem-based approaches to learning have a long history of advocating experience-based education. Psychological research and theory suggests that by having students learn through the experience of solving problems, they can learn both content and thinking strategies. Problem-based learning (PBL) is an instructional method in which students learn through facilitated problem solving. In PBL, student learning centers on a complex problem that does not have a single correct answer. Students work in collaborative groups to identify what they need to learn in order to solve a problem. They engage in self-directed learning (SDL) and then apply their new knowledge to the problem and reflect on what they learned and the effectiveness of the strategies employed. The teacher acts to facilitate the learning process rather than to provide knowledge. The goals of PBL include helping students develop 1) flexible knowledge, 2) effective problem-solving skills, 3) SDL skills, 4) effective collaboration skills, and 5) intrinsic motivation. This article discusses the nature of learning in PBL and examines the empirical evidence supporting it. There is considerable research on the first 3 goals of PBL but little on the last 2. Moreover, minimal research has been conducted outside medical and gifted education. Understanding how these goals are achieved with less skilled learners is an important part of a research agenda for PBL. The evidence suggests that PBL is an instructional approach that offers the potential to help students develop flexible understanding and lifelong learning skills.", "title": "" }, { "docid": "1f7d0ccae4e9f0078eabb9d75d1a8984", "text": "A social network is composed by communities of individuals or organizations that are connected by a common interest. 
Online social networking sites like Twitter, Facebook and Orkut are among the most visited sites in the Internet. Presently, there is a great interest in trying to understand the complexities of this type of network from both theoretical and applied point of view. The understanding of these social network graphs is important to improve the current social network systems, and also to develop new applications. Here, we propose a friend recommendation system for social network based on the topology of the network graphs. The topology of network that connects a user to his friends is examined and a local social network called Oro-Aro is used in the experiments. We developed an algorithm that analyses the sub-graph composed by a user and all the others connected people separately by three degree of separation. However, only users separated by two degree of separation are candidates to be suggested as a friend. The algorithm uses the patterns defined by their connections to find those users who have similar behavior as the root user. The recommendation mechanism was developed based on the characterization and analyses of the network formed by the user's friends and friends-of-friends (FOF).", "title": "" }, { "docid": "3b78988b74c2e42827c9e75e37d2223e", "text": "This paper addresses how to construct a RBAC-compatible attribute-based encryption (ABE) for secure cloud storage, which provides a user-friendly and easy-to-manage security mechanism without user intervention. Similar to role hierarchy in RBAC, attribute lattice introduced into ABE is used to define a seniority relation among all values of an attribute, whereby a user holding the senior attribute values acquires permissions of their juniors. Based on these notations, we present a new ABE scheme called Attribute-Based Encryption with Attribute Lattice (ABE-AL) that provides an efficient approach to implement comparison operations between attribute values on a poset derived from attribute lattice. By using bilinear groups of composite order, we propose a practical construction of ABE-AL based on forward and backward derivation functions. Compared with prior solutions, our scheme offers a compact policy representation solution, which can significantly reduce the size of privatekeys and ciphertexts. Furthermore, our solution provides a richer expressive power of access policies to facilitate flexible access control for ABE scheme.", "title": "" }, { "docid": "de394e291cac1a56cb19d858014bff19", "text": "The design of antennas for metal-mountable radio-frequency identification tags is driven by a unique set of challenges: cheap, small, low-profile, and conformal structures need to provide reliable operation when tags are mounted on conductive platforms of various shapes and sizes. During the past decade, a tremendous amount of research has been dedicated to meeting these stringent requirements. Currently, the tag-reading ranges of several meters are achieved with flexible-label types of tags. Moreover, a whole spectrum of tag-size performance ratios has been demonstrated through a variety of innovative antenna-design approaches. This article reviews and summarizes the progress made in antennas for metal-mountable tags, and presents future prospects.", "title": "" }, { "docid": "c1713b817c4b2ce6e134b6e0510a961f", "text": "BACKGROUND\nEntity recognition is one of the most primary steps for text analysis and has long attracted considerable attention from researchers. 
In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machines and conditional random fields, have been deployed to recognize entities from clinical texts in the past few years. In recent years, the recurrent neural network (RNN), one of the deep learning methods that has shown great potential on many problems including named entity recognition, has also gradually been applied to entity recognition from clinical texts.\n\n\nMETHODS\nIn this paper, we comprehensively investigate the performance of LSTM (long short-term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: an input layer, which generates a representation of each word of a sentence; an LSTM layer, which outputs another word representation sequence that captures the context information of each word in the sentence; and an inference layer, which makes tagging decisions according to the output of the LSTM layer, that is, it outputs a label sequence.\n\n\nRESULTS\nExperiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves the highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which are highly competitive with other state-of-the-art systems.\n\n\nCONCLUSIONS\nLSTM, which requires no hand-crafted features, has great potential for entity recognition from clinical texts. It outperforms traditional machine learning methods that depend on laborious feature engineering. A possible future direction is to integrate the knowledge bases that are widely available in the clinical domain into LSTM; this is part of our future work. How to use LSTM to recognize entities in specific formats is another possible direction.", "title": "" } ]
scidocsrr
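The clinical entity recognition passage in the record above describes its three-layer LSTM tagger (an embedding input layer, a context LSTM layer, and an inference layer) only in prose. As a purely illustrative sketch of that kind of architecture, and not the authors' actual implementation, the following PyTorch fragment shows one way such a tagger could be wired up; the class name, layer sizes, vocabulary size, and tag count are assumptions.

# Illustrative sketch only: three-layer tagger in the spirit of the passage above.
# Input layer (embeddings) -> bidirectional LSTM layer -> inference layer (per-token labels).
import torch
import torch.nn as nn

class LstmTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)            # input layer
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)                   # LSTM (context) layer
        self.classify = nn.Linear(2 * hidden_dim, num_tags)       # inference layer

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        contextual, _ = self.lstm(self.embed(token_ids))
        return self.classify(contextual)   # (batch, seq_len, num_tags) label scores

# Usage sketch: scores = LstmTagger(vocab_size=20000, num_tags=7)(batch_of_token_ids),
# followed by an argmax (or a CRF) over the last dimension to obtain BIO-style tags.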
50e30807cc5bac0a89ecac10859ef6c9
Metamorphic Testing and Testing with Special Values
[ { "docid": "421cb7fb80371c835a5d314455fb077c", "text": "This paper explains, in an introductory fashion, the method of specifying the correct behavior of a program by the use of input/output assertions and describes one method for showing that the program is correct with respect to those assertions. An initial assertion characterizes conditions expected to be true upon entry to the program and a final assertion characterizes conditions expected to be true upon exit from the program. When a program contains no branches, a technique known as symbolic execution can be used to show that the truth of the initial assertion upon entry guarantees the truth of the final assertion upon exit. More generally, for a program with branches one can define a symbolic execution tree. If there is an upper bound on the number of times each loop in such a program may be executed, a proof of correctness can be given by a simple traversal of the (finite) symbolic execution tree. However, for most programs, no fixed bound on the number of times each loop is executed exists and the corresponding symbolic execution trees are infinite. In order to prove the correctness of such programs, a more general assertion structure must be provided. The symbolic execution tree of such programs must be traversed inductively rather than explicitly. This leads naturally to the use of additional assertions which are called \"inductive assertions.\"", "title": "" } ]
[ { "docid": "f79e5a2b19bb51e8dc0017342a153fee", "text": "Decentralized ledger-based cryptocurrencies like Bitcoin present a way to construct payment systems without trusted banks. However, the anonymity of Bitcoin is fragile. Many altcoins and protocols are designed to improve Bitcoin on this issue, among which Zerocash is the first fullfledged anonymous ledger-based currency, using zero-knowledge proof, specifically zk-SNARK, to protect privacy. However, Zerocash suffers two problems: poor scalability and low efficiency. In this paper, we address the above issues by constructing a micropayment system in Zerocash called Z-Channel. First, we improve Zerocash to support multisignature and time lock functionalities, and prove that the reconstructed scheme is secure. Then we construct Z-Channel based on the improved Zerocash scheme. Our experiments demonstrate that Z-Channel significantly improves the scalability and reduces the confirmation time for Zerocash payments.", "title": "" }, { "docid": "28ab07763d682ae367b5c9ebd9c9ef13", "text": "Nowadays, the teaching-learning processes are constantly changing, one of the latest modifications promises to strengthen the development of digital skills and thinking in the participants, from an early age. In this sense, the present article shows the advances of a study oriented to the formation of programming abilities, computational thinking and collaborative learning in an initial education context. As part of the study it was initially proposed to conduct a training day for teachers who will participate in the experimental phase of the research, considering this human resource as a link of great importance to achieve maximum use of students in the development of curricular themes of the level, using ICT resources and programmable educational robots. The criterion and the positive acceptance expressed by the teaching group after the evaluation applied at the end of the session, constitute a good starting point for the development of the following activities that make up the research in progress.", "title": "" }, { "docid": "4e847c4acec420ef833a08a17964cb28", "text": "Machine learning models are vulnerable to adversarial examples, inputs maliciously perturbed to mislead the model. These inputs transfer between models, thus enabling black-box attacks against deployed models. Adversarial training increases robustness to attacks by injecting adversarial examples into training data. Surprisingly, we find that although adversarially trained models exhibit strong robustness to some white-box attacks (i.e., with knowledge of the model parameters), they remain highly vulnerable to transferred adversarial examples crafted on other models. We show that the reason for this vulnerability is the model’s decision surface exhibiting sharp curvature in the vicinity of the data points, thus hindering attacks based on first-order approximations of the model’s loss, but permitting black-box attacks that use adversarial examples transferred from another model. We harness this observation in two ways: First, we propose a simple yet powerful novel attack that first applies a small random perturbation to an input, before finding the optimal perturbation under a first-order approximation. Our attack outperforms prior “single-step” attacks on models trained with or without adversarial training. 
Second, we propose Ensemble Adversarial Training, an extension of adversarial training that additionally augments training data with perturbed inputs transferred from a number of fixed pre-trained models. On MNIST and ImageNet, ensemble adversarial training vastly improves robustness to black-box attacks.", "title": "" }, { "docid": "b429b37623a690cd4b224a334985f7dd", "text": "Data centers play a key role in the expansion of cloud computing. However, the efficiency of data center networks is limited by oversubscription. The typical unbalanced traffic distributions of a DCN further aggravate the problem. Wireless networking, as a complementary technology to Ethernet, has the flexibility and capability to provide feasible approaches to handle the problem. In this article, we analyze the challenges of DCNs and articulate the motivations of employing wireless in DCNs. We also propose a hybrid Ethernet/wireless DCN architecture and a mechanism to dynamically schedule wireless transmissions based on traffic demands. Our simulation study demonstrates the effectiveness of the proposed wireless DCN.", "title": "" }, { "docid": "17db3273504bba730c9e43c8ea585250", "text": "In this paper, License plate localization and recognition (LPLR) is presented. It uses image processing and character recognition technology in order to identify the license number plates of the vehicles automatically. This system is considerable interest because of its good application in traffic monitoring systems, surveillance devices and all kind of intelligent transport system. The objective of this work is to design algorithm for License Plate Localization and Recognition (LPLR) of Tanzanian License Plates. The plate numbers used are standard ones with black and yellow or black and white colors. Also, the letters and numbers are placed in the same row (identical vertical levels), resulting in frequent changes in the horizontal intensity. Due to that, the horizontal changes of the intensity have been easily detected, since the rows that contain the number plates are expected to exhibit many sharp variations. Hence, the edge finding method is exploited to find the location of the plate. To increase readability of the plate number, part of the image was enhanced, noise removal and smoothing median filter is used due to easy development. The algorithm described in this paper is implemented using MATLAB 7.11.0(R2010b).", "title": "" }, { "docid": "080f29a336c0188eeec82d27aa80092c", "text": "Do physically attractive individuals truly possess a multitude of better characteristics? The current study aimed to answer the age old question, “Do looks matter?” within the context of online dating and framed itself using cursory research performed by Brand and colleagues (2012). Good Genes Theory, Halo Effect, Physical Attractiveness Stereotype, and Social Information Procession theory were also used to explore what function appearance truly plays in online dating and how it influences a user’s written text. 83 men were surveyed and asked to rate 84 women’s online dating profiles (photos and texts) independently of one another to determine if those who were perceived as physically attractive also wrote more attractive texts as well. Results indicated that physical attractiveness was correlated with text attractiveness but not with text confidence. Findings also indicated the more attractive a woman’s photo, the less discrepancy there was between her photo attractiveness and text attractiveness scores. 
Finally, photo attractiveness did not differ significantly for men’s ratings of women in this study and women’s ratings of men in the Brand et al. (2012) study.", "title": "" }, { "docid": "ce0cfd1dd69e235f942b2e7583b8323b", "text": "Increasing use of the World Wide Web as a B2C commercial tool raises interest in understanding the key issues in building relationships with customers on the Internet. Trust is believed to be the key to these relationships. Given the differences between a virtual and a conventional marketplace, antecedents and consequences of trust merit re-examination. This research identifies a number of key factors related to trust in the B2C context and proposes a framework based on a series of underpinning relationships among these factors. The findings in this research suggest that people are more likely to purchase from the web if they perceive a higher degree of trust in e-commerce and have more experience in using the web. Customer’s trust levels are likely to be influenced by the level of perceived market orientation, site quality, technical trustworthiness, and user’s web experience. People with a higher level of perceived site quality seem to have a higher level of perceived market orientation and trustworthiness towards e-commerce. Furthermore, people with a higher level of trust in e-commerce are more likely to participate in e-commerce. Positive ‘word of mouth’, money back warranty and partnerships with well-known business partners, rank as the top three effective risk reduction tactics. These findings complement the previous findings on e-commerce and shed light on how to establish a trust relationship on the World Wide Web.  2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "09581c79829599090d8f838416058c05", "text": "This paper proposes to tackle the AMR parsing bottleneck by improving two components of an AMR parser: concept identification and alignment. We first build a Bidirectional LSTM based concept identifier that is able to incorporate richer contextual information to learn sparse AMR concept labels. We then extend an HMM-based word-to-concept alignment model with graph distance distortion and a rescoring method during decoding to incorporate the structural information in the AMR graph. We show integrating the two components into an existing AMR parser results in consistently better performance over the state of the art on various datasets.", "title": "" }, { "docid": "112b9294f4d606a0112fe80742698184", "text": "Peer-to-peer systems are typically designed around the assumption that all peers will willingly contribute resources to a global pool. They thus suffer from freeloaders, that is, participants who consume many more resources than they contribute. In this paper, we propose a general economic framework for avoiding freeloaders in peer-to-peer systems. Our system works by keeping track of the resource consumption and resource contribution of each participant. The overall standing of each participant in the system is represented by a single scalar value, called their ka ma. A set of nodes, called a bank-set , keeps track of each node’s karma, increasing it as resources are contributed, and decreasing it as they are consumed. Our framework is resistant to malicious attempts by the resource provider, consumer, and a fraction of the members of the bank set. 
We illustrate the application of this framework to a peer-to-peer filesharing application.", "title": "" }, { "docid": "945553f360d7f569f15d249dbc5fa8cd", "text": "One of the main issues in service collaborations among business partners is the possible lack of trust among them. A promising approach to cope with this issue is leveraging on blockchain technology by encoding with smart contracts the business process workflow. This brings the benefits of trust decentralization, transparency, and accountability of the service composition process. However, data in the blockchain are public, implying thus serious consequences on confidentiality and privacy. Moreover, smart contracts can access data outside the blockchain only through Oracles, which might pose new confidentiality risks if no assumptions are made on their trustworthiness. For these reasons, in this paper, we are interested in investigating how to ensure data confidentiality during business process execution on blockchain even in the presence of an untrusted Oracle.", "title": "" }, { "docid": "518cb733bfbb746315498c1409d118c5", "text": "BACKGROUND\nAndrogenetic alopecia (AGA) is a common form of scalp hair loss that affects up to 50% of males between 18 and 40 years old. Several molecules are commonly used for the treatment of AGA, acting on different steps of its pathogenesis (Minoxidil, Finasteride, Serenoa repens) and show some side effects. In literature, on the basis of hypertrichosis observed in patients treated with analogues of prostaglandin PGF2a, it was supposed that prostaglandins would have an important role in the hair growth: PGE and PGF2a play a positive role, while PGD2 a negative one.\n\n\nOBJECTIVE\nWe carried out a pilot study to evaluate the efficacy of topical cetirizine versus placebo in patients with AGA.\n\n\nPATIENTS AND METHODS\nA sample of 85 patients was recruited, of which 67 were used to assess the effectiveness of the treatment with topical cetirizine, while 18 were control patients.\n\n\nRESULTS\nWe found that the main effect of cetirizine was an increase in total hair density, terminal hair density and diameter variation from T0 to T1, while the vellus hair density shows an evident decrease. The use of a molecule as cetirizine, with no notable side effects, makes possible a good compliance by patients.\n\n\nCONCLUSION\nOur results have shown that topical cetirizine 1% is responsible for a significant improvement of the initial framework of AGA.", "title": "" }, { "docid": "b3fce50260d7f77e8ca294db9c6666f6", "text": "Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, boosting the range of applications of nanotechnology in the biomédical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. Recently, the advancements in graphene-based electronics have opened the door to electromagnetic communications in the nano-scale. In this paper, a new quantum mechanical framework is used to analyze the properties of Carbon Nanotubes (CNTs) as nano-dipole antennas. For this, first the transmission line properties of CNTs are obtained using the tight-binding model as functions of the CNT length, diameter, and edge geometry. 
Then, relevant antenna parameters such as the fundamental resonant frequency and the input impedance are calculated and compared to those of a nano-patch antenna based on a Graphene Nanoribbon (GNR) with similar dimensions. The results show that for a maximum antenna size in the order of several hundred nanometers (the expected maximum size for a nano-device), both a nano-dipole and a nano-patch antenna will be able to radiate electromagnetic waves in the terahertz band (0.1–10.0 THz).", "title": "" }, { "docid": "85d31f3940ee258589615661e596211d", "text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.", "title": "" }, { "docid": "4db8a0d39ef31b49f2b6d542a14b03a2", "text": "Climate-smart agriculture is one of the techniques that maximizes agricultural outputs through proper management of inputs based on climatological conditions. Real-time weather monitoring system is an important tool to monitor the climatic conditions of a farm because many of the farms related problems can be solved by better understanding of the surrounding weather conditions. There are various designs of weather monitoring stations based on different technological modules. However, different monitoring technologies provide different data sets, thus creating vagueness in accuracy of the weather parameters measured. In this paper, a weather station was designed and deployed in an Edamame farm, and its meteorological data are compared with the commercial Davis Vantage Pro2 installed at the same farm. The results show that the lab-made weather monitoring system is equivalently efficient to measure various weather parameters. Therefore, the designed system welcomes low-income farmers to integrate it into their climate-smart farming practice.", "title": "" }, { "docid": "074de6f0c250f5c811b69598551612e4", "text": "In this paper we present a novel GPU-friendly real-time voxelization technique for rendering homogeneous media that is defined by particles, e.g. fluids obtained from particle-based simulations such as Smoothed Particle Hydrodynamics (SPH). Our method computes view-adaptive binary voxelizations with on-the-fly compression of a tiled perspective voxel grid, achieving higher resolutions than previous approaches. It allows for interactive generation of realistic images, enabling advanced rendering techniques such as ray casting-based refraction and reflection, light scattering and absorption, and ambient occlusion. 
In contrast to previous methods, it does not rely on preprocessing such as expensive, and often coarse, scalar field conversion or mesh generation steps. Our method directly takes unsorted particle data as input. It can be further accelerated by identifying fully populated simulation cells during simulation. The extracted surface can be filtered to achieve smooth surface appearance.", "title": "" }, { "docid": "099dbf8d4c0b401cd3389583eb4495f3", "text": "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.", "title": "" }, { "docid": "848dd074e4615ea5ecb164c96fac6c63", "text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.", "title": "" }, { "docid": "5a805b6f9e821b7505bccc7b70fdd557", "text": "There are many factors that influence the translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator’s socio-cultural and ideology constraints influence the production of his/her translations. 
As a mixed-methods study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage, where a detailed description and comparison (contrastive and comparative analysis) is provided. The micro-level analysis covers the lexical items along with the grammatical items (passive vs. active, nominalisation vs. de-nominalisation, moralisation, and omission vs. addition). In order to obtain more reliable and objective data, frequencies of the ideologically significant occurrences were computed, along with percentages and the chi-square formula, throughout the data analysis stage; these form the quantitative part of the current study. The main objective of these data analysis methods is to find the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalents in the source text (ST). The findings indicate that there are significant differences between the two TTs in relation to the word choices, including the lexical items and the syntactic structure, compared with the ST. These significant differences indicate some ideological transmission through the translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators’ socio-cultural and ideological constraints.", "title": "" }, { "docid": "dc3de555216f10d84890ecb1165774ff", "text": "Research into the visual perception of human emotion has traditionally focused on the facial expression of emotions. Recently researchers have turned to the more challenging field of emotional body language, i.e. emotion expression through body pose and motion. In this work, we approach recognition of basic emotional categories from a computational perspective. In keeping with recent computational models of the visual cortex, we construct a biologically plausible hierarchy of neural detectors, which can discriminate seven basic emotional states from static views of associated body poses. The model is evaluated against human test subjects on a recent set of stimuli manufactured for research on emotional body language.", "title": "" }, { "docid": "93c84b6abfe30ff7355e4efc310b440b", "text": "Parallel file systems (PFS) are widely used in modern computing systems to mask the ever-increasing performance gap between computing and data access. PFSs favor large requests, and do not work well for small requests, especially small random requests. Newer Solid State Drives (SSD) have excellent performance on small random data accesses, but also incur a high monetary cost. In this study, we propose a hybrid architecture named the Smart Selective SSD Cache (S4D-Cache), which employs a small set of SSD-based file servers as a selective cache of conventional HDD-based file servers. A novel scheme is introduced to identify performance-critical data, and conduct selective cache admission to fully utilize the hybrid architecture in terms of data-access parallelism and randomness. We have implemented an S4D-Cache under the MPI-IO and PVFS2 parallel file system. Our experiments show that S4D-Cache can significantly improve I/O throughput, and is a promising approach for parallel applications.", "title": "" } ]
scidocsrr
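Among the negative passages of the record above, the S4D-Cache abstract mentions "selective cache admission" for small random requests but gives no concrete procedure. The sketch below is a simplified, hypothetical rendering of that general idea rather than the paper's actual scheme; the admission threshold, the LRU eviction, and the sequentiality test are all assumptions.

# Illustrative sketch only: admit a block into the fast tier only when the request
# is small and non-sequential; large sequential requests bypass the cache.
from collections import OrderedDict

class SelectiveCache:
    def __init__(self, capacity_blocks=1024, small_request_limit=64 * 1024):
        self.capacity = capacity_blocks
        self.small_limit = small_request_limit   # bytes: only small requests are admitted
        self.blocks = OrderedDict()              # block_id -> data, kept in LRU order
        self.last_end = None                     # end offset of the previous request

    def _is_random(self, offset, size):
        sequential = self.last_end is not None and offset == self.last_end
        self.last_end = offset + size
        return not sequential

    def read(self, block_id, offset, size, backend_read):
        random_access = self._is_random(offset, size)
        if block_id in self.blocks:              # hit in the fast tier: refresh LRU position
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = backend_read(block_id)            # miss: fetch from the slow (HDD) servers
        if size <= self.small_limit and random_access:
            self.blocks[block_id] = data         # selective admission
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict the least recently used block
        return data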
d7bf8a79235036e6858e9e8354089a9c
From Abstraction to Implementation: Can Computational Thinking Improve Complex Real-World Problem Solving? A Computational Thinking-Based Approach to the SDGs
[ { "docid": "b64a91ca7cdeb3dfbe5678eee8962aa7", "text": "Computational thinking is gaining recognition as an important skill set for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course within the curriculum, and there is little consensus on what exactly computational thinking entails and how to teach and evaluate it. To address these concerns, we have developed a computational thinking framework to be used as a planning and evaluative tool. Within this framework, we aim to unify the differing opinions about what computational thinking should involve. As a case study, we have applied the framework to Light-Bot, an educational game with a strong focus on programming, and found that the framework provides us with insight into the usefulness of the game to reinforce computer science concepts.", "title": "" } ]
[ { "docid": "c4b1615bbd32f99fa59ca2d7b8c40b10", "text": "Practical face recognition systems are sometimes confronted with low-resolution face images. Traditional two-step methods solve this problem through employing super-resolution (SR). However, these methods usually have limited performance because the target of SR is not absolutely consistent with that of face recognition. Moreover, time-consuming sophisticated SR algorithms are not suitable for real-time applications. To avoid these limitations, we propose a novel approach for LR face recognition without any SR preprocessing. Our method based on coupled mappings (CMs), projects the face images with different resolutions into a unified feature space which favors the task of classification. These CMs are learned through optimizing the objective function to minimize the difference between the correspondences (i.e., low-resolution image and its high-resolution counterpart). Inspired by locality preserving methods for dimensionality reduction, we introduce a penalty weighting matrix into our objective function. Our method significantly improves the recognition performance. Finally, we conduct experiments on publicly available databases to verify the efficacy of our algorithm.", "title": "" }, { "docid": "4798cb0bcd147e6a49135b845d7f2624", "text": "There is an upsurging interest in designing succinct data structures for basic searching problems (see [23] and references therein). The motivation has to be found in the exponential increase of electronic data nowadays available which is even surpassing the significant increase in memory and disk storage capacities of current computers. Space reduction is an attractive issue because it is also intimately related to performance improvements as noted by several authors (e.g. Knuth [15], Bentley [5]). In designing these implicit data structures the goal is to reduce as much as possible the auxiliary information kept together with the input data without introducing a significant slowdown in the final query performance. Yet input data are represented in their entirety thus taking no advantage of possible repetitiveness into them. The importance of those issues is well known to programmers who typically use various tricks to squeeze data as much as possible and still achieve good query performance. Their approaches, though, boil down to heuristics whose effectiveness is witnessed only by experimentation. In this paper, we address the issue of compressing and indexing data by studying it in a theoretical framework. We devise a novel data structure for indexing and searching whose space occupancy is a function of the entropy of the underlying data set. The novelty resides in the careful combination of a compression algorithm, proposed by Burrows and Wheeler [7], with the structural properties of a well known indexing tool, the Suffix Array [17]. We call the data structure opportunistic since its space occupancy is decreased when the input is compressible at no significant slowdown in the query performance. More precisely, its space occupancy is optimal in an information-content sense because a text T [1, u] is stored using O(Hk(T )) + o(1) bits per input symbol, where Hk(T ) is the kth order entropy of T (the bound holds for any fixed k). Given an arbitrary string P [1, p], the opportunistic data structure allows to search for the occ occurrences of P in T requiring O(p+occ log u) time complexity (for any fixed > 0). 
If data are uncompressible we achieve the best space bound currently known [11]; on compressible data our solution improves the succinct suffix array of [11] and the classical suffix tree and suffix array data structures either in space or in query time complexity or both. It is a belief [27] that some space overhead should be paid to use full-text indices (like suffix trees or suffix arrays) with respect to word-based indices (like inverted lists). The results in this paper show that a full-text index may achieve sublinear space overhead on compressible texts. As an application we devise a variant of the well-known Glimpse tool [18] which achieves sublinear space and sublinear query time complexity. Conversely, inverted lists achieve only the second goal [27], and classical Glimpse achieves both goals but under some restrictive conditions [4]. Finally, we investigate the modifiability of our opportunistic data structure by studying how to choreograph its basic ideas with a dynamic setting, thus achieving effective searching and updating time bounds.", "title": "" }, { "docid": "67af0ebeebec40efa792a010ce205890", "text": "We present a near-optimal polynomial-time approximation algorithm for the asymmetric traveling salesman problem for graphs of bounded orientable or non-orientable genus. Given any algorithm that achieves an approximation ratio of f(n) on arbitrary n-vertex graphs as a black box, our algorithm achieves an approximation factor of O(f(g)) on graphs with genus g. In particular, the O(log n/loglog n)-approximation algorithm for general graphs by Asadpour et al. [SODA 2010] immediately implies an O(log g/loglog g)-approximation algorithm for genus-g graphs. Moreover, recent results on approximating the genus of graphs imply that our O(log g/loglog g)-approximation algorithm can be applied to bounded-degree graphs even if no genus-g embedding of the graph is given. Our result improves and generalizes the o(√g log g)-approximation algorithm of Oveis Gharan and Saberi [SODA 2011], which applies only to graphs with orientable genus g and requires a genus-g embedding as part of the input, even for bounded-degree graphs. Finally, our techniques yield an O(1)-approximation algorithm for ATSP on graphs of genus g with running time 2^{O(g)} · n^{O(1)}.", "title": "" }, { "docid": "113b8cfda23cf7e8b3d7b4821d549bf7", "text": "A load-dependent zero-current detector is proposed in this paper for speeding up the transient response when the load current changes from heavy to light loads. The fast transient control signal determines how long the reversed inductor current flows, according to sudden load variations. At the beginning of a load variation from heavy to light loads, the sensed voltage is compared with a higher reference voltage to discharge the overshoot output voltage, achieving a fast transient response. Moreover, after an adaptive reversed-current period, the fast transient mechanism is turned off since the output voltage is rapidly regulated back to the acceptable level. Simulation results demonstrate that the ZCD circuit permits the reverse current to flow back into the n-type power MOSFET at the beginning of load variations. 
The settling time is decreased to about 35 mus when load current suddenly changes from 500mA to 10 mA.", "title": "" }, { "docid": "62af709fd559596f6d3d7a52902d5da5", "text": "This paper presents the results of several large-scale studies of face recognition employing visible light and infra-red (IR) imagery in the context of principal component analysis. We find that in a scenario involving time lapse between gallery and probe, and relatively controlled lighting, (1) PCA-based recognition using visible light images outperforms PCA-based recognition using infra-red images, (2) the combination of PCA-based recognition using visible light and infra-red imagery substantially outperforms either one individually. In a same session scenario (i.e. nearsimultaneous acquisition of gallery and probe images) neither modality is significantly better than the other. These experimental results reinforce prior research that employed a smaller data set, presenting a convincing argument that, even across a broad experimental spectrum, the behaviors enumerated above are valid and consistent.", "title": "" }, { "docid": "82ca6a400bf287dc287df9fa751ddac2", "text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term “information systems”, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.", "title": "" }, { "docid": "715de052c6a603e3c8a572531920ecfa", "text": "Muscle samples were obtained from the gastrocnemius of 17 female and 23 male track athletes, 10 untrained women, and 11 untrained men. Portions of the specimen were analyzed for total phosphorylase, lactic dehydrogenase (LDH), and succinate dehydrogenase (SDH) activities. Sections of the muscle were stained for myosin adenosine triphosphatase, NADH2 tetrazolium reductase, and alpha-glycerophosphate dehydrogenase. Maximal oxygen uptake (VO2max) was measured on a treadmill for 23 of the volunteers (6 female athletes, 11 male athletes, 10 untrained women, and 6 untrained men). These measurements confirm earlier reports which suggest that the athlete's preference for strength, speed, and/or endurance events is in part a matter of genetic endowment. Aside from differences in fiber composition and enzymes among middle-distance runners, the only distinction between the sexes was the larger fiber areas of the male athletes. SDH activity was found to correlate 0.79 with VO2max, while muscle LDH appeared to be a function of muscle fiber composition. While sprint- and endurance-trained athletes are characterized by distinct fiber compositions and enzyme activities, participants in strength events (e.g., shot-put) have relatively low muscle enzyme activities and a variety of fiber compositions.", "title": "" }, { "docid": "903b68096d2559f0e50c38387260b9c8", "text": "Vitamin C in humans must be ingested for survival. 
Vitamin C is an electron donor, and this property accounts for all its known functions. As an electron donor, vitamin C is a potent water-soluble antioxidant in humans. Antioxidant effects of vitamin C have been demonstrated in many experiments in vitro. Human diseases such as atherosclerosis and cancer might occur in part from oxidant damage to tissues. Oxidation of lipids, proteins and DNA results in specific oxidation products that can be measured in the laboratory. While these biomarkers of oxidation have been measured in humans, such assays have not yet been validated or standardized, and the relationship of oxidant markers to human disease conditions is not clear. Epidemiological studies show that diets high in fruits and vegetables are associated with lower risk of cardiovascular disease, stroke and cancer, and with increased longevity. Whether these protective effects are directly attributable to vitamin C is not known. Intervention studies with vitamin C have shown no change in markers of oxidation or clinical benefit. Dose concentration studies of vitamin C in healthy people showed a sigmoidal relationship between oral dose and plasma and tissue vitamin C concentrations. Hence, optimal dosing is critical to intervention studies using vitamin C. Ideally, future studies of antioxidant actions of vitamin C should target selected patient groups. These groups should be known to have increased oxidative damage as assessed by a reliable biomarker or should have high morbidity and mortality due to diseases thought to be caused or exacerbated by oxidant damage.", "title": "" }, { "docid": "154c40c2fab63ad15ded9b341ff60469", "text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.", "title": "" }, { "docid": "b456ef31418fbe2a82bac60045a57fc2", "text": "Continuous blood pressure (BP) monitoring in a noninvasive and unobtrusive way can significantly improve the awareness, control and treatment rate of prevalent hypertension. Pulse transit time (PTT) has become increasingly popular in recent years for continuous BP measurement without a cuff. 
However, the accuracy issue of PTT-based method remains to be solved for clinical application. Some previous studies have attempted to estimate BP with only PTT by using linear regression, which is susceptible to arterial regulation and may not reflect the actual relationship between PTT and BP. Furthermore, PTT does not contain all the information of BP variation, thereby resulting in unsatisfactory accuracy. In this paper we establish a cuffless BP estimation model from a physiological perspective by utilizing PTT and photoplethysmogram (PPG) intensity ratio (PIR), an indicator we have recently proposed for evaluation of the change in arterial diameter and the low frequency variation of BP, with the consideration that PIR can track changes in mean BP (MBP) and arterial diameter change. The performance of the proposed BP model was evaluated by comparing the estimated BP with Finapres BP as reference on 10 healthy subjects. The results showed that the mean ± standard deviation (SD) of the estimation error for systolic and diastolic BP were -0.41 ± 5.15 and -0.84 ± 4.05 mmHg, and mean absolute difference (MAD) were 4.18 and 3.43 mmHg, respectively. Furthermore, the proposed modeling method was superior to one contrast PTT-based method, demonstrating the proposed model would be promising for reliable continuous cuffless BP measurement.", "title": "" }, { "docid": "9876e4298f674a617f065f348417982a", "text": "On the basis of medical officers diagnosis, thirty three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined with four variables viz, systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital and the control group did not participate in any of the treatment stimuli. Yoga imparted in the morning and in the evening with 1 hr/session. day-1 for a total period of 11-weeks. Medical treatment comprised drug intake every day for the whole experimental period. The result of pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.", "title": "" }, { "docid": "bbb06abacfd8f4eb01fac6b11a4447bf", "text": "In this paper, we present a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm following an inertial assisted Kalman Filter and reusing the estimated 3D map. By leveraging an inertial assisted Kalman Filter, we achieve an efficient motion tracking bearing fast dynamic movement in the front-end. To enable place recognition and reduce the trajectory estimation drift, we construct a factor graph based non-linear optimization in the back-end. We carefully design a feedback mechanism to balance the front/back ends ensuring the estimation accuracy. We also propose a novel initialization method that accurately estimate the scale factor, the gravity, the velocity, and gyroscope and accelerometer biases in a very robust way. We evaluated the algorithm on a public dataset, when compared to other state-of-the-art monocular Visual-Inertial SLAM approaches, our algorithm achieves better accuracy and robustness in an efficient way. 
In addition, we evaluate our algorithm in a monocular-inertial setup with a low-cost IMU to achieve a robust, low-drift, real-time SLAM system.", "title": "" }, { "docid": "85ccad436c7e7eed128825e3946ae0ef", "text": "Recent research has made great strides in the field of detecting botnets. However, botnets of all kinds continue to plague the Internet, as many ISPs and organizations do not deploy these techniques. We aim to mitigate this state by creating a very low-cost method of detecting infected bot hosts. Our approach is to leverage the botnet detection work carried out by some organizations to easily locate collaborating bots elsewhere. We created BotMosaic as a countermeasure to IRC-based botnets. BotMosaic relies on captured bot instances controlled by a watermarker, who inserts a particular pattern into their network traffic. This pattern can then be detected at a very low cost by client organizations and the watermark can be tuned to provide acceptable false-positive rates. A novel feature of the watermark is that it is inserted collaboratively into the flows of multiple captured bots at once, in order to ensure the signal is strong enough to be detected. BotMosaic can also be used to detect stepping stones and to help trace back to the botmaster. It is content agnostic and can operate on encrypted traffic. We evaluate BotMosaic using simulations and a testbed deployment.", "title": "" }, { "docid": "6573629e918822c0928e8cf49f20752c", "text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https://github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.", "title": "" }, { "docid": "ef9437b03a95fc2de438fe32bd2e32b9", "text": "Modeling is not simply a process of response mimicry as commonly believed. Modeled judgments and actions may differ in specific content but embody the same rule. For example, a model may deal with moral dilemmas that differ widely in the nature of the activity but apply the same moral standard to them. Modeled activities thus convey rules for generative and innovative behavior. This higher-level learning is achieved through abstract modeling. Once observers extract the rules underlying the modeled activities they can generate new behaviors that go beyond what they have seen or heard. Creativeness rarely springs entirely from individual inventiveness. A lot of modeling goes on in creativity. By refining preexisting innovations, synthesizing them in new ways, and adding novel elements to them, something new is created. 
When exposed to models of differing styles of thinking and behaving, observers vary in what they adopt from the different sources and thereby create new blends of personal characteristics that differ from the individual models (Bandura, Ross & Ross, 1963). Modeling influences that exemplify new perspectives and innovative styles of thinking also foster creativity by weakening conventional mind sets (Belcher, 1975; Harris & Evans, 1973).", "title": "" }, { "docid": "b2aec3f88af47e47b4ca60493895cb8e", "text": "In this paper, a simple but efficient approach for blind image splicing detection is proposed. Image splicing is a common and fundamental operation used for image forgery. The detection of image splicing is a preliminary but desirable study for image forensics. Passive detection approaches of image splicing are usually regarded as pattern recognition problems based on features which are sensitive to splicing. In the proposed approach, we analyze the discontinuity of image pixel correlation and coherency caused by splicing in terms of image run-length representation and sharp image characteristics. The statistical features extracted from image run-length representation and image edge statistics are used for splicing detection. The support vector machine (SVM) is used as the classifier. Our experimental results demonstrate that the two proposed features outperform existing ones both in detection accuracy and computational complexity.", "title": "" }, { "docid": "525ddfaae4403392e8817986f2680a68", "text": "Documentation errors increase healthcare costs and cause unnecessary patient deaths. As the standard language for diagnoses and billing, ICD codes serve as the foundation for medical documentation worldwide. Despite the prevalence of electronic medical records, hospitals still witness high levels of ICD miscoding. In this paper, we propose to automatically document ICD codes with far-field speech recognition. Far-field speech occurs when the microphone is located several meters from the source, as is common with smart homes and security systems. Our method combines acoustic signal processing with recurrent neural networks to recognize and document ICD codes in real time. To evaluate our model, we collected a far-field speech dataset of ICD-10 codes and found our model to achieve 87% accuracy with a BLEU score of 85%. By sampling from an unsupervised medical language model, our method is able to outperform existing methods. Overall, this work shows the potential of automatic speech recognition to provide efficient, accurate, and cost-effective healthcare documentation.", "title": "" }, { "docid": "9c008dc2f3da4453317ce92666184da0", "text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator QEMU. Our implementation can simulate various cases of cache configuration and obtain every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer which is close to native execution of ARM Cortex-M3(125 MIPS at 100 MHz). 
Compared to the widely used cache simulation tool, Valgrind, our simulator is three times faster.", "title": "" }, { "docid": "e3051e92e84c69f999c09fe751c936f0", "text": "Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be “compressed” to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state-of-the-art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that compressibility of models that tend to overfit is limited: We establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. The bounds are complemented by empirical results that show an increase in overfitting implies an increase in the number of bits required to describe a trained network.", "title": "" }, { "docid": "19a538b6a49be54b153b0a41b6226d1f", "text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is composed of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e., exerting a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.", "title": "" } ]
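The embedded-system passage above describes estimating cache latency by replaying every memory access against a configurable cache model inside an instruction-set simulator. As a rough, hedged illustration of that general idea (not the QEMU module the authors built, whose internals are not given here), the Python sketch below models a direct-mapped cache and counts hits and misses over a trace of addresses; the cache size, line size, and the example trace are assumptions made purely for demonstration.

```python
# Minimal direct-mapped cache model: replays a memory-access trace and
# counts hits/misses. Sizes and the trace are illustrative assumptions.
class DirectMappedCache:
    def __init__(self, cache_bytes=16 * 1024, line_bytes=32):
        self.line_bytes = line_bytes
        self.num_lines = cache_bytes // line_bytes
        self.tags = [None] * self.num_lines   # one stored tag per cache line
        self.hits = 0
        self.misses = 0

    def access(self, address):
        block = address // self.line_bytes    # which memory block is touched
        index = block % self.num_lines        # cache line the block maps to
        tag = block // self.num_lines         # remaining high-order bits
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[index] = tag            # fill the line on a miss

if __name__ == "__main__":
    cache = DirectMappedCache()
    trace = [0x1000, 0x1004, 0x1020, 0x9000, 0x1000, 0x9004]  # assumed trace
    for addr in trace:
        cache.access(addr)
    print(f"hits={cache.hits} misses={cache.misses}")
```

In a full-system setting the trace would come from the instrumented emulator rather than from a hard-coded list.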
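The blind image-splicing passage above builds its detector from run-length statistics and edge statistics fed to an SVM. The sketch below is one plausible reading of the run-length part only: a horizontal run-length histogram computed on a thresholded image and passed to scikit-learn's SVC. The threshold, histogram length, labels, and the random stand-in images are assumptions for illustration, not the paper's exact feature set.

```python
import numpy as np
from sklearn.svm import SVC

def run_length_histogram(gray, threshold=128, max_run=32):
    """Histogram of horizontal run lengths in a thresholded image."""
    binary = (np.asarray(gray) >= threshold).astype(np.int8)
    hist = np.zeros(max_run, dtype=np.float64)
    for row in binary:
        run = 1
        for a, b in zip(row[:-1], row[1:]):
            if a == b:
                run += 1
            else:
                hist[min(run, max_run) - 1] += 1
                run = 1
        hist[min(run, max_run) - 1] += 1      # close the final run of the row
    return hist / hist.sum()                  # normalize so image size does not dominate

# Illustrative training on random stand-ins for authentic/spliced images.
rng = np.random.default_rng(0)
X = np.stack([run_length_histogram(rng.integers(0, 256, (64, 64))) for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)             # assumed labels: 0 = authentic, 1 = spliced
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy on stand-in data:", clf.score(X, y))
```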
scidocsrr
81d6eda2f2c652ad866ae891ba9cf8b9
Periodization paradigms in the 21st century: evidence-led or tradition-driven?
[ { "docid": "978749608ae97db4fd3d0e05f740c016", "text": "The theory of training was established about five decades ago when knowledge of athletes' preparation was far from complete and the biological background was based on a relatively small amount of objective research findings. At that time, traditional 'training periodization', a division of the entire seasonal programme into smaller periods and training units, was proposed and elucidated. Since then, international sport and sport science have experienced tremendous changes, while the traditional training periodization has remained at more or less the same level as the published studies of the initial publications. As one of the most practically oriented components of theory, training periodization is intended to offer coaches basic guidelines for structuring and planning training. However, during recent decades contradictions between the traditional model of periodization and the demands of high-performance sport practice have inevitably developed. The main limitations of traditional periodization stemmed from: (i) conflicting physiological responses produced by 'mixed' training directed at many athletic abilities; (ii) excessive fatigue elicited by prolonged periods of multi-targeted training; (iii) insufficient training stimulation induced by workloads of medium and low concentration typical of 'mixed' training; and (iv) the inability to provide multi-peak performances over the season. The attempts to overcome these limitations led to development of alternative periodization concepts. The recently developed block periodization model offers an alternative revamped approach for planning the training of high-performance athletes. Its general idea proposes the sequencing of specialized training cycles, i.e. blocks, which contain highly concentrated workloads directed to a minimal number of targeted abilities. Unlike the traditional model, in which the simultaneous development of many athletic abilities predominates, block-periodized training presupposes the consecutive development of reasonably selected target abilities. The content of block-periodized training is set down in its general principles, a taxonomy of mesocycle blocks, and guidelines for compiling an annual plan.", "title": "" } ]
[ { "docid": "47bfe9238083f0948c16d7beeac75155", "text": "In this paper, we propose a solution procedure for the Elementary Shortest Path Problem with Resource Constraints (ESPPRC). A relaxed version of this problem in which the path does not have to be elementary has been the backbone of a number of solution procedures based on column generation for several important problems, such as vehicle routing and crew-pairing. In many cases relaxing the restriction of an elementary path resulted in optimal solutions in a reasonable computation time. However, for a number of other problems, the elementary path restriction has too much impact on the solution to be relaxed or might even be necessary. We propose an exact solution procedure for the ESPPRC which extends the classical label correcting algorithm originally developed for the relaxed (non-elementary) path version of this problem. We present computational experiments of this algorithm for our specific problem and embedded in a column generation scheme for the classical Vehicle Routing Problem with Time Windows.", "title": "" }, { "docid": "614285482e8748e99fb061dd9e0f3887", "text": "A top-wall substrate integrated waveguide (SIW) slot radiator for generating circular polarized (CP) field is proposed and characterized in this letter. The reflection of the slot radiator is extremely weak, which simplifies the linear traveling wave array design. Based on such a structure, a 16-element CP SIW traveling wave antenna array is designed, fabricated, and measured at 16 GHz. A -23 dB side lobe level (SLL) with an axial ratio (AR) of 1.95 dB is experimentally achieved. The size of the proposed SIW CP linear array antenna is 285 mm times 22 mm. The measured gain is 18.9 dB, and the usable bandwidth is 2.5%.", "title": "" }, { "docid": "f88dfa78bc6e36691c4f74152946cb45", "text": "A new antenna, designed on a polyethylene terephthalate (PET) substrate and implemented by inkjet printing using a conductive ink, is proposed as a passive tag antenna for UHF radio frequency identification (RFID). The operating bandwidth of the proposed antenna is very large since it encompasses all worldwide UHF RFID bands and extends well beyond at both edges. Moreover, it has a very simple geometry, can be easily tuned to feed many of the commercial RFID chips, and is very robust with respect to realization tolerances. The antenna has been designed using a general-purpose 3-D computer-aided design (CAD), CST Microwave Studio, and measured results are in very good agreement with simulations. The proposed passive RFID tag meets both the objectives of low-cost and size reduction.", "title": "" }, { "docid": "512d418f33d864d0e48ce4b7ab52a8b9", "text": "(1) Background: Since early yield prediction is relevant for resource requirements of harvesting and marketing in the whole fruit industry, this paper presents a new approach of using image analysis and tree canopy features to predict early yield with artificial neural networks (ANN); (2) Methods: Two back propagation neural network (BPNN) models were developed for the early period after natural fruit drop in June and the ripening period, respectively. Within the same periods, images of apple cv. “Gala” trees were captured from an orchard near Bonn, Germany. Two sample sets were developed to train and test models; each set included 150 samples from the 2009 and 2010 growing season. For each sample (each canopy image), pixels were segmented into fruit, foliage, and background using image segmentation. 
The four features extracted from the data set for the canopy were: total cross-sectional area of fruits, fruit number, total cross-section area of small fruits, and cross-sectional area of foliage, and were used as inputs. With the actual weighted yield per tree as a target, BPNN was employed to learn their mutual relationship as a prerequisite to develop the prediction; (3) Results: For the developed BPNN model of the early period after June drop, correlation coefficients (R2) between the estimated and the actual weighted yield, mean forecast error (MFE), mean absolute percentage error (MAPE), and root mean square error (RMSE) were 0.81, −0.05, 10.7%, 2.34 kg/tree, respectively. For the model of the ripening period, these measures were 0.83, −0.03, 8.9%, 2.3 kg/tree, respectively. In 2011, the two previously developed models were used to predict apple yield. The RMSE and R2 values between the estimated and harvested apple yield were 2.6 kg/tree and 0.62 for the early period (small, green fruit) and improved near harvest (red, large fruit) to 2.5 kg/tree and 0.75 for a tree with ca. 18 kg yield per tree. For further method verification, the cv. “Pinova” apple trees were used as another variety in 2012 to develop the BPNN prediction model for the early period after June drop. The model was used in 2013, which gave similar results as those found with cv. “Gala”; (4) Conclusion: Overall, the results showed in this research that the proposed estimation models performed accurately using canopy and fruit features using image analysis algorithms.", "title": "" }, { "docid": "6e9ee317822ba925b9d3e823c717d08d", "text": "Agriculture is the major occupation in India and forms the backbone of Indian economy in which irrigation plays a crucial role for increasing the quality and quantity of crop yield. In spite of many revolutionary advancements in agriculture, there has not been a dramatic increase in agricultural performance. Lack of irrigation infrastructure and agricultural knowledge are the critical factors influencing agricultural performance. However, by using advanced agricultural equipment, the effect of these factors can be curtailed. The presented system aims at increasing the yield of crops by using an intelligent irrigation controller that makes use of wireless sensors. Sensors are used to monitor primary parameters such as soil moisture, soil pH, temperature and humidity. Irrigation decisions are taken based on the sensed data and the type of crop being grown. The system provides a mobile application in which farmers can remotely monitor and control the irrigation system. Also, the water pump is protected against damages due to voltage variations and dry running. Keywords—Android application, Bluetooth, humidity, irrigation, soil moisture, soil pH, temperature, wireless sensors.", "title": "" }, { "docid": "bbf9c2cfd22dc0caeac796c1f16261b8", "text": "Recent years have witnessed the emergence of Smart Environments technology for assisting people with their daily routines and for remote health monitoring. A lot of work has been done in the past few years on Activity Recognition and the technology is not just at the stage of experimentation in the labs, but is ready to be deployed on a larger scale. 
In this paper, we design a data-mining framework to extract the useful features from sensor data collected in the smart home environment and select the most important features based on two different feature selection criterions, then utilize several machine learning techniques to recognize the activities. To validate these algorithms, we use real sensor data collected from volunteers living in our smart apartment test bed. We compare the performance between alternative learning algorithms and analyze the prediction results of two different group experiments performed in the smart home.", "title": "" }, { "docid": "d339ef4e124fdc9d64330544b7391055", "text": "Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Part I of this series presented a neurophysiologic theory of the effects of Sudarshan Kriya Yoga (SKY). Part II will review clinical studies, our own clinical observations, and guidelines for the safe and effective use of yoga breath techniques in a wide range of clinical conditions. Although more clinical studies are needed to document the benefits of programs that combine pranayama (yogic breathing) asanas (yoga postures), and meditation, there is sufficient evidence to consider Sudarshan Kriya Yoga to be a beneficial, low-risk, low-cost adjunct to the treatment of stress, anxiety, post-traumatic stress disorder (PTSD), depression, stress-related medical illnesses, substance abuse, and rehabilitation of criminal offenders. SKY has been used as a public health intervention to alleviate PTSD in survivors of mass disasters. Yoga techniques enhance well-being, mood, attention, mental focus, and stress tolerance. Proper training by a skilled teacher and a 30-minute practice every day will maximize the benefits. Health care providers play a crucial role in encouraging patients to maintain their yoga practices.", "title": "" }, { "docid": "39bf990d140eb98fa7597de1b6165d49", "text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.", "title": "" }, { "docid": "c8ca57db545f2d1f70f3640651bb3e79", "text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. 
These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem'? The answer is, any way that works.\" (Richard P. Feyman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.", "title": "" }, { "docid": "3767702e22ac34493bb1c6c2513da9f7", "text": "The majority of the online reviews are written in free-text format. It is often useful to have a measure which summarizes the content of the review. One such measure can be sentiment which expresses the polarity (positive/negative) of the review. However, a more granular classification of sentiment, such as rating stars, would be more advantageous and would help the user form a better opinion. In this project, we propose an approach which involves a combination of topic modeling and sentiment analysis to achieve this objective and thereby help predict the rating stars.", "title": "" }, { "docid": "718cf9a405a81b9a43279a1d02f5e516", "text": "In cross-cultural psychology, one of the major sources of the development and display of human behavior is the contact between cultural populations. Such intercultural contact results in both cultural and psychological changes. At the cultural level, collective activities and social institutions become altered, and at the psychological level, there are changes in an individual's daily behavioral repertoire and sometimes in experienced stress. The two most common research findings at the individual level are that there are large variations in how people acculturate and in how well they adapt to this process. Variations in ways of acculturating have become known by the terms integration, assimilation, separation, and marginalization. Two variations in adaptation have been identified, involving psychological well-being and sociocultural competence. 
One important finding is that there are relationships between how individuals acculturate and how well they adapt: Often those who integrate (defined as being engaged in both their heritage culture and in the larger society) are better adapted than those who acculturate by orienting themselves to one or the other culture (by way of assimilation or separation) or to neither culture (marginalization). Implications of these findings for policy and program development and for future research are presented.", "title": "" }, { "docid": "8cfcadd2216072dbeb5c7f5d99326c49", "text": "In this paper, a human eye localization algorithm in images and video is presented for faces with frontal pose and upright orientation. A given face region is filtered by a high-pass filter of a wavelet transform. In this way, edges of the region are highlighted, and a caricature-like representation is obtained. After analyzing horizontal projections and profiles of edge regions in the high-pass filtered image, the candidate points for each eye are detected. All the candidate points are then classified using a support vector machine based classifier. Locations of each eye are estimated according to the most probable ones among the candidate points. It is experimentally observed that our eye localization method provides promising results for both image and video processing applications.", "title": "" }, { "docid": "151b3f80fe443b8f9b5f17c0531e0679", "text": "Pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer’s disease have been the subject of extensive research in recent years. In this paper, we use deep learning methods, and in particular sparse autoencoders and 3D convolutional neural networks, to build an algorithm that can predict the disease status of a patient, based on an MRI scan of the brain. We report on experiments using the ADNI data set involving 2,265 historical scans. We demonstrate that 3D convolutional neural networks outperform several other classifiers reported in the literature and produce state-of-art results.", "title": "" }, { "docid": "5232ea4de509766a4fcf0e195f05d81b", "text": "This paper provides new results for control of complex flight maneuvers for a quadrotor unmanned aerial vehicle (UAV). The flight maneuvers are defined by a concatenation of flight modes or primitives, each of which is achieved by a nonlinear controller that solves an output tracking problem. A mathematical model of the quadrotor UAV rigid body dynamics, defined on the configuration space SE(3), is introduced as a basis for the analysis. The quadrotor UAV has four input degrees of freedom, namely the magnitudes of the four rotor thrusts; each flight mode is defined by solving an asymptotic optimal tracking problem. Although many flight modes can be studied, we focus on three output tracking problems, namely (1) outputs given by the vehicle attitude, (2) outputs given by the three position variables for the vehicle center of mass, and (3) output given by the three velocity variables for the vehicle center of mass. A nonlinear tracking controller is developed on the special Euclidean group SE(3) for each flight mode, and the closed loop is shown to have desirable properties that are almost global in each case. 
Several numerical examples, including one example in which the quadrotor recovers from being initially upside down and another example that includes switching and transitions between different flight modes, illustrate the versatility and generality of the proposed approach.", "title": "" }, { "docid": "89271c3d5497ea7d7f84b86d67baeb15", "text": "Three studies are presented which provide a mixed methods exploration of fingerprint analysis. Using a qualitative approach (Expt 1), expert analysts used a 'think aloud' task to describe their process of analysis. Thematic analysis indicated consistency of practice, and experts' comments underpinned the development of a training tool for subsequent use. Following this, a quantitative approach (Expt 2) assessed expert reliability on a fingerprint matching task. The results suggested that performance was high and often at ceiling, regardless of the length of experience held by the expert. As a final test, the experts' fingerprint analysis method was taught to a set of naïve students, and their performance on the fingerprint matching task was compared both to the expert group and to an untrained novice group (Expt 3). Results confirmed that the trained students performed significantly better than the untrained students. However, performance remained substantially below that of the experts. Several explanations are explored to account for the performance gap between experts and trained novices, and their implications are discussed in terms of the future of fingerprint evidence in court.", "title": "" }, { "docid": "584645a035454682222a26870377703c", "text": "Conventionally, the sum and difference signals of a tracking system are fixed up by sum and difference network and the network is often composed of four or more magic tees whose arms direct at four different directions, which give inconveniences to assemble. In this paper, a waveguide side-wall slot directional coupler and a double dielectric slab filled waveguide phase shifter is used to form a planar magic tee with four arms in the same H-plane. Four planar magic tees can be used to construct the W-band planar monopulse comparator. The planar magic tee is analyzed exactly with Ansoft HFSS software, and is optimized by genetic algorithm. Simulation results are presented, which show good performance.", "title": "" }, { "docid": "fcdde2f5b55b6d8133e6dea63d61b2c8", "text": "It has been observed by many people that a striking number of quite diverse mathematical problems can be formulated as problems in integer programming, that is, linear programming problems in which some or all of the variables are required to assume integral values. This fact is rendered quite interesting by recent research on such problems, notably by R. E. Gomory [2, 3], which gives promise of yielding efficient computational techniques for their solution. The present paper provides yet another example of the versatility of integer programming as a mathematical modeling device by representing a generalization of the well-known “Travelling Salesman Problem” in integer programming terms. The authors have developed several such models, of which the one presented here is the most efficient in terms of generality, number of variables, and number of constraints. This model is due to the second author [4] and was presented briefly at the Symposium on Combinatorial Problems held at Princeton University, April 1960, sponsored by SIAM and IBM. 
The problem treated is: (1) A salesman is required to visit each of n cities, indexed by 1, ··· , n. He leaves from a “base city” indexed by 0, visits each of the n other cities exactly once, and returns to city 0. During his travels he must return to 0 exactly t times, including his final return (here t may be allowed to vary), and he must visit no more than p cities in one tour. (By a tour we mean a succession of visits to cities without stopping at city 0.) It is required to find such an itinerary which minimizes the total distance traveled by the salesman.\n Note that if t is fixed, then for the problem to have a solution we must have tp ≧ n. For t = 1, p ≧ n, we have the standard traveling salesman problem.\nLet d_ij (i ≠ j = 0, 1, ··· , n) be the distance covered in traveling from city i to city j. The following integer programming problem will be shown to be equivalent to (1): (2) Minimize the linear form ∑∑_{0≦i≠j≦n} d_ij x_ij over the set determined by the relations ∑_{i=0, i≠j}^{n} x_ij = 1 (j = 1, ··· , n), ∑_{j=0, j≠i}^{n} x_ij = 1 (i = 1, ··· , n), u_i - u_j + p x_ij ≦ p - 1 (1 ≦ i ≠ j ≦ n), where the x_ij are non-negative integers and the u_i (i = 1, …, n) are arbitrary real numbers. (We shall see that it is permissible to restrict the u_i to be non-negative integers as well.)\n If t is fixed it is necessary to add the additional relation: ∑_{i=1}^{n} x_i0 = t. Note that the constraints require that x_ij = 0 or 1, so that a natural correspondence between these two problems exists if the x_ij are interpreted as follows: The salesman proceeds from city i to city j if and only if x_ij = 1. 
Under this correspondence the form to be minimized in (2) is the total distance to be traveled by the salesman in (1), so the burden of proof is to show that the two feasible sets correspond; i.e., a feasible solution to (2) has x_ij which do define a legitimate itinerary in (1), and, conversely, a legitimate itinerary in (1) defines x_ij which, together with appropriate u_i, satisfy the constraints of (2).\nConsider a feasible solution to (2).\n The number of returns to city 0 is given by ∑_{i=1}^{n} x_i0. The constraints of the form ∑ x_ij = 1, all x_ij non-negative integers, represent the conditions that each city (other than zero) is visited exactly once. The u_i play a role similar to node potentials in a network and the inequalities involving them serve to eliminate tours that do not begin and end at city 0 and tours that visit more than p cities. Consider any x_{r0 r1} = 1 (r1 ≠ 0). There exists a unique r2 such that x_{r1 r2} = 1. Unless r2 = 0, there is a unique r3 with x_{r2 r3} = 1. We proceed in this fashion until some rj = 0. This must happen since the alternative is that at some point we reach an rk = rj, j + 1 < k. \n Since none of the r's are zero we have u_{ri} - u_{ri+1} + p x_{ri ri+1} ≦ p - 1, or u_{ri} - u_{ri+1} ≦ -1. Summing from i = j to k - 1, we have u_{rj} - u_{rk} = 0 ≦ j + 1 - k, which is a contradiction. Thus all tours include city 0. It remains to observe that no tour is of length greater than p. Suppose such a tour exists: x_{0 r1}, x_{r1 r2}, ··· , x_{rp rp+1} = 1 with all ri ≠ 0. 
Then, as before, u_{r1} - u_{rp+1} ≦ -p, or u_{rp+1} - u_{r1} ≧ p.\n But we have u_{rp+1} - u_{r1} + p x_{rp+1 r1} ≦ p - 1, or u_{rp+1} - u_{r1} ≦ p (1 - x_{rp+1 r1}) - 1 ≦ p - 1, which is a contradiction.\nConversely, if the x_ij correspond to a legitimate itinerary, it is clear that the u_i can be adjusted so that u_i = j if city i is the j-th city visited in the tour which includes city i, for we then have u_i - u_j = -1 if x_ij = 1, and always u_i - u_j ≦ p - 1.\n The above integer program involves n^2 + n constraints (if t is not fixed) in n^2 + 2n variables. Since the inequality form of constraint is fundamental for integer programming calculations, one may eliminate 2n variables, say the x_i0 and x_0j, by means of the equation constraints and produce", "title": "" }, { "docid": "502cae1daa2459ed0f826ed3e20c44e4", "text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. 
The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.", "title": "" }, { "docid": "585ed9a4a1c903c836ee7d6b5677e042", "text": "Several factors contribute to on-going challenges of spatial planning and urban policy in megacities, including rapid population shifts, less organized urban areas, and a lack of data with which to monitor urban growth and land use change. To support Mumbai's sustainable development, this research was conducted to examine past urban land use changes on the basis of remote sensing data collected between 1973 and 2010. An integrated Markov Chains–Cellular Automata (MC–CA) urban growth model was implemented to predict the city's expansion for the years 2020–2030. To consider the factors affecting urban growth, the MC–CA model was also connected to multi-criteria evaluation to generate transition probability maps. The results of the multi-temporal change detection show that the highest urban growth rates, 142% occurred between 1973 and 1990. In contrast, the growth rates decreased to 40% between 1990 and 2001 and decreased to 38% between 2001 and 2010. The areas most affected by this degradation were open land and croplands. The MC–CA model predicts that this trend will continue in the future. Compared to the reference year, 2010, increases in built-up areas of 26% by 2020 and 12% by 2030 are forecast. Strong evidence is provided for complex future urban growth, characterized by a mixture of growth patterns. The most pronounced of these is urban expansion toward the north along the main traffic infrastructure, linking the two currently non-affiliated main settlement ribbons. Additionally, urban infill developments are expected to emerge in the eastern areas, and these developments are expected to increase urban pressure. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1c269ac67fb954da107229fe4e18dcc8", "text": "The number of output-voltage levels available in pulsewidth-modulated (PWM) voltage-source inverters can be increased by inserting a split-wound coupled inductor between the upper and lower switches in each inverter leg. Interleaved PWM control of both inverter-leg switches produces three-level PWM voltage waveforms at the center tap of the coupled inductor winding, representing the inverter-leg output terminal, with a PWM frequency twice the switching frequency. The winding leakage inductance is in series with the output terminal, with the main magnetizing inductance filtering the instantaneous PWM-cycle voltage differences between the upper and lower switches. Since PWM dead-time signal delays can be removed, higher device switching frequencies and higher fundamental output voltages are made possible. The proposed inverter topologies produce five-level PWM voltage waveforms between two inverter-leg terminals with a PWM frequency up to four times higher than the inverter switching frequency. This is achieved with half the number of switches used in alternative schemes. This paper uses simulated and experimental results to illustrate the operation of the proposed inverter structures.", "title": "" } ]
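The travelling-salesman passage in the list above (the Miller–Tucker–Zemlin formulation) states its integer program inline. Purely as a restatement of the constraints already quoted there, with no additions beyond the optional fixed-t relation the passage also mentions, the formulation reads:

```latex
\begin{align*}
\text{minimize}\quad & \sum_{0 \le i \ne j \le n} d_{ij}\, x_{ij} \\
\text{subject to}\quad & \sum_{\substack{i=0 \\ i \ne j}}^{n} x_{ij} = 1 \qquad (j = 1, \dots, n)\\
& \sum_{\substack{j=0 \\ j \ne i}}^{n} x_{ij} = 1 \qquad (i = 1, \dots, n)\\
& u_i - u_j + p\, x_{ij} \le p - 1 \qquad (1 \le i \ne j \le n)\\
& x_{ij} \ge 0 \text{ integer}, \qquad u_i \in \mathbb{R},
\end{align*}
```

with the additional relation \(\sum_{i=1}^{n} x_{i0} = t\) when the number of tours t is fixed; the passage then argues that these constraints force the x_ij to define tours through city 0, each visiting at most p cities.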
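The echo-state-network passage that closes the list above concerns how many reservoir nodes are needed to recover past inputs. The toy numpy experiment below is not the paper's analysis; it merely sets up a random linear reservoir scaled to have spectral radius below one, trains a ridge-regression readout to reproduce the input from a fixed delay, and reports the normalized recovery error. The reservoir size, delay, and regularization constant are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, delay = 100, 2000, 10          # reservoir size, stream length, recall delay (assumed)

# Random linear reservoir, rescaled so its spectral radius is below 1 (echo state property).
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(N)

u = rng.standard_normal(T)           # scalar input stream
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = W @ x + w_in * u[t]          # linear (memoryless-node) state update
    states[t] = x

# Ridge-regression readout trained to reproduce the input from `delay` steps ago.
X, y = states[delay:], u[:-delay]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
err = np.mean((X @ w_out - y) ** 2) / np.var(y)
print(f"normalized recovery error at delay {delay}: {err:.3e}")
```

Sweeping the delay or the reservoir size in this toy setup traces out the kind of memory-capacity behavior the passage studies analytically.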
scidocsrr
90de74b88910549d837e827ce6061567
ALL OUR SONS: THE DEVELOPMENTAL NEUROBIOLOGY AND NEUROENDOCRINOLOGY OF BOYS AT RISK.
[ { "docid": "7340866fa3965558e1571bcc5294b896", "text": "The human stress response has been characterized, both physiologically and behaviorally, as \"fight-or-flight.\" Although fight-or-flight may characterize the primary physiological responses to stress for both males and females, we propose that, behaviorally, females' responses are more marked by a pattern of \"tend-and-befriend.\" Tending involves nurturant activities designed to protect the self and offspring that promote safety and reduce distress; befriending is the creation and maintenance of social networks that may aid in this process. The biobehavioral mechanism that underlies the tend-and-befriend pattern appears to draw on the attachment-caregiving system, and neuroendocrine evidence from animal and human studies suggests that oxytocin, in conjunction with female reproductive hormones and endogenous opioid peptide mechanisms, may be at its core. This previously unexplored stress regulatory system has manifold implications for the study of stress.", "title": "" } ]
[ { "docid": "e189f36ba0fcb91d0608d0651c60516e", "text": "In this paper, we describe the progressive design of the gesture recognition module of an automated food journaling system -- Annapurna. Annapurna runs on a smartwatch and utilises data from the inertial sensors to first identify eating gestures, and then captures food images which are presented to the user in the form of a food journal. We detail the lessons we learnt from multiple in-the-wild studies, and show how eating recognizer is refined to tackle challenges such as (i) high gestural diversity, and (ii) non-eating activities with similar gestural signatures. Annapurna is finally robust (identifying eating across a wide diversity in food content, eating styles and environments) and accurate (false-positive and false-negative rates of 6.5% and 3.3% respectively)", "title": "" }, { "docid": "b99c42f412408610e1bfd414f4ea6b9f", "text": "ADPfusion combines the usual high-level, terse notation of Haskell with an underlying fusion framework. The result is a parsing library that allows the user to write algorithms in a style very close to the notation used in formal languages and reap the performance benefits of automatic program fusion. Recent developments in natural language processing and computational biology have lead to a number of works that implement algorithms that process more than one input at the same time. We provide an extension of ADPfusion that works on extended index spaces and multiple input sequences, thereby increasing the number of algorithms that are amenable to implementation in our framework. This allows us to implement even complex algorithms with a minimum of overhead, while enjoying all the guarantees that algebraic dynamic programming provides to the user.", "title": "" }, { "docid": "d81282c41c609b980442f481d0a7fa3d", "text": "Some of the recent applications in the field of the power supplies use multiphase converters to achieve fast dynamic response, smaller input/output filters, or better packaging. Typically, these converters have several paralleled power stages, with a current loop in each phase and a single voltage loop. The presence of the current loops avoids current imbalance among phases. The purpose of this paper is to demonstrate that, in CCM, with a proper design, there is an intrinsic mechanism of self-balance that reduces the current imbalance. Thus, in the buck converter, if natural zero-voltage switching (ZVS) is achieved in both transitions, the instantaneous inductor current compensates partially the different DC currents through the phases. The need for using n current loops will be finally determined by the application but not by the converter itself. Using the buck converter as a base, a multiphase converter has been developed. Several tests have been carried out in the laboratory and the results show clearly that, when the conditions are met, the phase currents are very well balanced even during transient conditions.", "title": "" }, { "docid": "f752d156cc1c606e5b06cf99a90b2a49", "text": "We study the relationship between Facebook popularity (number of contacts) and personality traits on a large number of subjects. We test to which extent two prevalent viewpoints hold. That is, popular users (those with many social contacts) are the ones whose personality traits either predict many offline (real world) friends or predict propensity to maintain superficial relationships. 
We find that the predictor for number of friends in the real world (Extraversion) is also a predictor for number of Facebook contacts. We then test whether people who have many social contacts on Facebook are the ones who are able to adapt themselves to new forms of communication, present themselves in likable ways, and have propensity to maintain superficial relationships. We show that there is no statistical evidence to support such a conjecture.", "title": "" }, { "docid": "1158e01718dd8eed415dd5b3513f4e30", "text": "Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately, and rely on hand-crafted visual feature from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of multi-scale input layer, U-shape convolutional network, side-output layer, and multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple level receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn the rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. For improving the segmentation performance further, we also introduce the polar transformation, which provides the representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation result on ORIGA data set. Simultaneously, the proposed method also obtains the satisfactory glaucoma screening performances with calculated CDR value on both ORIGA and SCES datasets.", "title": "" }, { "docid": "993590032de592f4bb69d9c906ff76a8", "text": "The evolution toward 5G mobile networks will be characterized by an increasing number of wireless devices, increasing device and service complexity, and the requirement to access mobile services ubiquitously. Two key enablers will allow the realization of the vision of 5G: very dense deployments and centralized processing. This article discusses the challenges and requirements in the design of 5G mobile networks based on these two key enablers. It discusses how cloud technologies and flexible functionality assignment in radio access networks enable network densification and centralized operation of the radio access network over heterogeneous backhaul networks. The article describes the fundamental concepts, shows how to evolve the 3GPP LTE a", "title": "" }, { "docid": "4a69a0c5c225d9fbb40373aebaeb99be", "text": "The hyperlink structure of Wikipedia constitutes a key resource for many Natural Language Processing tasks and applications, as it provides several million semantic annotations of entities in context. Yet only a small fraction of mentions across the entire Wikipedia corpus is linked. 
In this paper we present the automatic construction and evaluation of a Semantically Enriched Wikipedia (SEW) in which the overall number of linked mentions has been more than tripled solely by exploiting the structure of Wikipedia itself and the wide-coverage sense inventory of BabelNet. As a result we obtain a sense-annotated corpus with more than 200 million annotations of over 4 million different concepts and named entities. We then show that our corpus leads to competitive results on multiple tasks, such as Entity Linking and Word Similarity.", "title": "" }, { "docid": "993590032de592f4bb69d9c906ff76a8", "text": "The importance of the Translation Lookaside Buffer (TLB) on system performance is well known. There have been numerous prior efforts addressing TLB design issues for cutting down access times and lowering miss rates. However, it was only recently that the first exploration [26] on prefetching TLB entries ahead of their need was undertaken and a mechanism called Recency Prefetching was proposed. There is a large body of literature on prefetching for caches, and it is not clear how they can be adapted (or if the issues are different) for TLBs, how well suited they are for TLB prefetching, and how they compare with the recency prefetching mechanism. This paper presents the first detailed comparison of different prefetching mechanisms (previously proposed for caches) - arbitrary stride prefetching and Markov prefetching - for TLB entries, and evaluates their pros and cons. In addition, this paper proposes a novel prefetching mechanism, called Distance Prefetching, that attempts to capture patterns in the reference behavior in a smaller space than earlier proposals. Using detailed simulations of a wide variety of applications (56 in all) from different benchmark suites and all the SPEC CPU2000 applications, this paper demonstrates the benefits of distance prefetching.", "title": "" }, { "docid": "022a2f42669fdb337cfb4646fed9eb09", "text": "A mobile agent with the task of classifying its sensor pattern has to cope with ambiguous information. Active recognition of three-dimensional objects involves the observer in a search for discriminative evidence, e.g., by change of its viewpoint. This paper defines the recognition process as a sequential decision problem with the objective to disambiguate initial object hypotheses. Reinforcement learning then provides an efficient method to autonomously develop near-optimal decision strategies in terms of sensorimotor mappings. The proposed system learns object models from visual appearance and uses a radial basis function (RBF) network for a probabilistic interpretation of the two-dimensional views. The information gain in fusing successive object hypotheses provides a utility measure to reinforce actions leading to discriminative viewpoints. The system is verified in experiments with 16 objects and two degrees of freedom in sensor motion. Crucial improvements in performance are gained using the learned rather than random camera placements. © 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "1d8f11b742dd810f228b80747ec2a0bd", "text": "The particle swarm optimization algorithm was shown to converge rapidly during the initial stages of a global search, but around the global optimum, the search process becomes very slow. In contrast, the gradient descending method can achieve faster convergent speed around the global optimum, and at the same time, the convergent accuracy can be higher. 
So in this paper, a hybrid algorithm combining particle swarm optimization (PSO) algorithm with back-propagation (BP) algorithm, also referred to as PSO–BP algorithm, is proposed to train the weights of feedforward neural network (FNN), the hybrid algorithm can make use of not only strong global searching ability of the PSOA, but also strong local searching ability of the BP algorithm. In this paper, a novel selection strategy of the inertial weight is introduced to the PSO algorithm. In the proposed PSO–BP algorithm, we adopt a heuristic way to give a transition from particle swarm search to gradient descending search. In this paper, we also give three kind of encoding strategy of particles, and give the different problem area in which every encoding strategy is used. The experimental results show that the proposed hybrid PSO–BP algorithm is better than the Adaptive Particle swarm optimization algorithm (APSOA) and BP algorithm in convergent speed and convergent accuracy. 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "a75e29521b04d5e09228918e4ed560a6", "text": "This study assessed motives for social network site (SNS) use, group belonging, collective self-esteem, and gender effects among older adolescents. Communication with peer group members was the most important motivation for SNS use. Participants high in positive collective self-esteem were strongly motivated to communicate with peer group via SNS. Females were more likely to report high positive collective self-esteem, greater overall use, and SNS use to communicate with peers. Females also posted higher means for group-in-self, passing time, and entertainment. Negative collective self-esteem correlated with social compensation, suggesting that those who felt negatively about their social group used SNS as an alternative to communicating with other group members. Males were more likely than females to report negative collective self-esteem and SNS use for social compensation and social identity gratifications.", "title": "" }, { "docid": "88c592bdd7bb9c9348545734a9508b7b", "text": "environments: An introduction C.-S. Li B. L. Brech S. Crowder D. M. Dias H. Franke M. Hogstrom D. Lindquist G. Pacifici S. Pappe B. Rajaraman J. Rao R. P. Ratnaparkhi R. A. Smith M. D. Williams During the past few years, enterprises have been increasingly aggressive in moving mission-critical and performance-sensitive applications to the cloud, while at the same time many new mobile, social, and analytics applications are directly developed and operated on cloud computing platforms. These two movements are encouraging the shift of the value proposition of cloud computing from cost reduction to simultaneous agility and optimization. These requirements (agility and optimization) are driving the recent disruptive trend of software defined computing, for which the entire computing infrastructureVcompute, storage and networkVis becoming software defined and dynamically programmable. The key elements within software defined environments include capability-based resource abstraction, goal-based and policy-based workload definition, and outcome-based continuous mapping of the workload to the available resources. Furthermore, software defined environments provide the tooling and capabilities to compose workloads from existing components that are then continuously and autonomously mapped onto the underlying programmable infrastructure. 
These elements enable software defined environments to achieve agility, efficiency, and continuous outcome-optimized provisioning and management, plus continuous assurance for resiliency and security. This paper provides an overview and introduction to the key elements and challenges of software defined environments.", "title": "" }, { "docid": "6c7284ca77809210601c213ee8a685bb", "text": "Patients with non-small cell lung cancer (NSCLC) require careful staging at the time of diagnosis to determine prognosis and guide treatment recommendations. The seventh edition of the TNM Classification of Malignant Tumors is scheduled to be published in 2009 and the International Association for the Study of Lung Cancer (IASLC) created the Lung Cancer Staging Project (LCSP) to guide revisions to the current lung cancer staging system. These recommendations will be submitted to the American Joint Committee on Cancer (AJCC) and to the Union Internationale Contre le Cancer (UICC) for consideration in the upcoming edition of the staging manual. Data from over 100,000 patients with lung cancer were submitted for analysis and several modifications were suggested for the T descriptors and the M descriptors although the current N descriptors remain unchanged. These recommendations will further define homogeneous patient subsets with similar survival rates. More importantly, these revisions will help guide clinicians in making optimal, stage-specific, treatment recommendations.", "title": "" }, { "docid": "86dfbb8dc8682f975ccb3cfce75eac3a", "text": "BACKGROUND\nAlthough many precautions have been introduced into early burn management, post burn contractures are still significant problems in burn patients. In this study, a form of Z-plasty in combination with relaxing incision was used for the correction of contractures.\n\n\nMETHODS\nPreoperatively, a Z-advancement rotation flap combined with a relaxing incision was drawn on the contracture line. Relaxing incision created a skin defect like a rhomboid. Afterwards, both limbs of the Z flap were incised. After preparation of the flaps, advancement and rotation were made in order to cover the rhomboid defect. Besides subcutaneous tissue, skin edges were closely approximated with sutures.\n\n\nRESULTS\nThis study included sixteen patients treated successfully with this flap. It was used without encountering any major complications such as infection, hematoma, flap loss, suture dehiscence or flap necrosis. All rotated and advanced flaps healed uneventfully. In all but one patient, effective contracture release was achieved by means of using one or two Z-plasty. In one patient suffering severe left upper extremity contracture, a little residual contracture remained due to inadequate release.\n\n\nCONCLUSION\nWhen dealing with this type of Z-plasty for mild contractures, it offers a new option for the correction of post burn contractures, which is safe, simple and effective.", "title": "" }, { "docid": "8760b523ca90dccf7a9a197622bda043", "text": "The increasing need for better performance, protection, and reliability in shipboard power distribution systems, and the increasing availability of power semiconductors is generating the potential for solid state circuit breakers to replace traditional electromechanical circuit breakers. This paper reviews various solid state circuit breaker topologies that are suitable for low and medium voltage shipboard system protection. 
Depending on the application solid state circuit breakers can have different main circuit topologies, fault detection methods, commutation methods of power semiconductor devices, and steady state operation after tripping. This paper provides recommendations on the solid state circuit breaker topologies that provides the best performance-cost tradeoff based on the application.", "title": "" }, { "docid": "54fc5bc85ef8022d099fff14ab1b7ce0", "text": "Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.", "title": "" }, { "docid": "ddc6a5e9f684fd13aec56dc48969abc2", "text": "During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.", "title": "" }, { "docid": "6ff034e2ff0d54f7e73d23207789898d", "text": "This letter presents two high-gain, multidirector Yagi-Uda antennas for use within the 24.5-GHz ISM band, realized through a multilayer, purely additive inkjet printing fabrication process on a flexible substrate. Multilayer material deposition is used to realize these 3-D antenna structures, including a fully printed 120- μm-thick dielectric substrate for microstrip-to-slotline feeding conversion. The antennas are fabricated, measured, and compared to simulated results showing good agreement and highlighting the reliable predictability of the printing process. 
An endfire realized gain of 8 dBi is achieved within the 24.5-GHz ISM band, presenting the highest-gain inkjet-printed antenna at this end of the millimeter-wave regime. The results of this work further demonstrate the feasibility of utilizing inkjet printing for low-cost, vertically integrated antenna structures for on-chip and on-package integration throughout the emerging field of high-frequency wireless electronics.", "title": "" }, { "docid": "dade322206eeab84bfdae7d45fe043ca", "text": "Lung cancer has the highest death rate among all cancers in the USA. In this work we focus on improving the ability of computer-aided diagnosis (CAD) systems to predict the malignancy of nodules from cropped CT images of lung nodules. We evaluate the effectiveness of very deep convolutional neural networks at the task of expert-level lung nodule malignancy classification. Using the state-of-the-art ResNet architecture as our basis, we explore the effect of curriculum learning, transfer learning, and varying network depth on the accuracy of malignancy classification. Due to a lack of public datasets with standardized problem definitions and train/test splits, studies in this area tend to not compare directly against other existing work. This makes it hard to know the relative improvement in the new solution. In contrast, we directly compare our system against two state-of-the-art deep learning systems for nodule classification on the LIDC/IDRI dataset using the same experimental setup and data set. The results show that our system achieves the highest performance in terms of all metrics measured including sensitivity, specificity, precision, AUROC, and accuracy. The proposed method of combining deep residual learning, curriculum learning, and transfer learning translates to high nodule classification accuracy. This reveals a promising new direction for effective pulmonary nodule CAD systems that mirrors the success of recent deep learning advances in other image-based application domains.", "title": "" }, { "docid": "7e949c7cd50d1e381f58fe26f9736124", "text": "Mental illness is one of the most undertreated health problems worldwide. Previous work has shown that there are remarkably strong cues to mental illness in short samples of the voice. These cues are evident in severe forms of illness, but it would be most valuable to make earlier diagnoses from a richer feature set. Furthermore there is an abstraction gap between these voice cues and the diagnostic cues used by practitioners. We believe that by closing this gap, we can build more effective early diagnostic systems for mental illness. In order to develop improved monitoring, we need to translate the high-level cues used by practitioners into features that can be analyzed using signal processing and machine learning techniques. In this paper we describe the elicitation process that we used to tap the practitioners' knowledge. We borrow from both AI (expert systems) and HCI (contextual inquiry) fields in order to perform this knowledge transfer. The paper highlights an unusual and promising role for HCI - the analysis of interaction data for health diagnosis.", "title": "" } ]
scidocsrr
274c00be5f61e8d94bb71e89efa7561f
"How Many Silences Are There?" Men's Experience of Victimization in Intimate Partner Relationships.
[ { "docid": "ccabfee18c9b3dfc322d55572f24f53a", "text": "The concept of hegemonic masculinity has influenced gender studies across many academic fields but has also attracted serious criticism. The authors trace the origin of the concept in a convergence of ideas in the early 1980s and map the ways it was applied when research on men and masculinities expanded. Evaluating the principal criticisms, the authors defend the underlying concept of masculinity, which in most research use is neither reified nor essentialist. However, the criticism of trait models of gender and rigid typologies is sound. The treatment of the subject in research on hegemonic masculinity can be improved with the aid of recent psychological models, although limits to discursive flexibility must be recognized. The concept of hegemonic masculinity does not equate to a model of social reproduction; we need to recognize social struggles in which subordinated masculinities influence dominant forms. Finally, the authors review what has been confirmed from early formulations (the idea of multiple masculinities, the concept of hegemony, and the emphasis on change) and what needs to be discarded (onedimensional treatment of hierarchy and trait conceptions of gender). The authors suggest reformulation of the concept in four areas: a more complex model of gender hierarchy, emphasizing the agency of women; explicit recognition of the geography of masculinities, emphasizing the interplay among local, regional, and global levels; a more specific treatment of embodiment in contexts of privilege and power; and a stronger emphasis on the dynamics of hegemonic masculinity, recognizing internal contradictions and the possibilities of movement toward gender democracy.", "title": "" } ]
[ { "docid": "4b7e71b412770cbfe059646159ec66ca", "text": "We present empirical evidence to demonstrate that there is little or no difference between the Java Virtual Machine and the .NET Common Language Runtime, as regards the compilation and execution of object-oriented programs. Then we give details of a case study that proves the superiority of the Common Language Runtime as a target for imperative programming language compilers (in particular GCC).", "title": "" }, { "docid": "f5ba6ef8d99ccc57bf64f7e5c3c05f7e", "text": "Applications of fuzzy logic (FL) to power electronics and drives are on the rise. The paper discusses some representative applications of FL in the area, preceded by an interpretative review of fuzzy logic controller (FLC) theory. A discussion on design and implementation aspects is presented, that also considers the interaction of neural networks and fuzzy logic techniques. Finally, strengths and limitations of FLC are considered, including possible applications in the area.", "title": "" }, { "docid": "a1c126807088d954b73c2bd5d696c481", "text": "or, why space syntax works when it looks as though it shouldn't 0 Abstract A common objection to the space syntax analysis of cities is that even in its own terms the technique of using a non-uniform line representation of space and analysing it by measures that are essentially topological, ignores too much geometric and metric detail to be credible. In this paper it is argued that far from ignoring geometric and metric properties the 'line-graph' internalises them into the structure of the graph and in doing so allows the graph analysis to pick up the nonlocal, or extrinsic, properties of spaces that are critical to the movement dynamics through which a city evolves its essential structures. Nonlocal properties are those which are defined by the relation of elements to all others in the system, rather than intrinsic to the element itself. The method also leads to a powerful analysis of urban structures because cities are essentially nonlocal systems. 1 Preliminaries 1.1 The critique of line graphs Space syntax is a family of techniques for representing and analysing spatial layouts of all kinds. A spatial representation is first chosen according to how space is defined for the purposes of the research-rooms, convex spaces, lines, convex isovists, and so on-and then one or more measures of 'configuration' are selected to analyse the patterns formed by that representation. Prior to the researcher setting up the research question, no one representation or measure is privileged over others. Part of the researcher's task is to discover which representation and which measure captures the logic of a particular system, as shown by observation of its functioning. In the study of cities, one representation and one type of measure has proved more consistently fruitful than others: the representation of urban space as a matrix of the 'longest and fewest' lines, the 'axial map', and the analysis of this by translating the line matrix into a graph, and the use of the various versions of the 'topological' (i.e. nonmetric) measure of patterns of line connectivity called 'integration'. (Hillier et al 1982, Steadman 1983, Hillier & Hanson 1984) This 'line graph' approach has proved quite unexpectedly successful. 
It has generated not only models for predicting urban et al 1998), but also strong theoretical results on urban structure, and even a general theory of the dynamics linking the urban grid, movement, land uses and building densities in 'organic' cities …", "title": "" }, { "docid": "a2e91a00e2f3bc23b5de83ca39566c84", "text": "This paper addresses an emerging new field of research that combines the strengths and capabilities of electronics and textiles in one: electronic textiles, or e-textiles. E-textiles, also called Smart Fabrics, have not only \"wearable\" capabilities like any other garment, but also local monitoring and computation, as well as wireless communication capabilities. Sensors and simple computational elements are embedded in e-textiles, as well as built into yarns, with the goal of gathering sensitive information, monitoring vital statistics and sending them remotely (possibly over a wireless channel) for further processing. Possible applications include medical (infant or patient) monitoring, personal information processing systems, or remote monitoring of deployed personnel in military or space applications. We illustrate the challenges imposed by the dual textile/electronics technology on their modeling and optimization methodology.", "title": "" }, { "docid": "9244b687b0031e895cea1fcf5a0b11da", "text": "Bacopa monnieri (L.) Wettst., a traditional Indian medicinal plant with high commercial potential, is used as a potent nervine tonic. A slow growth protocol was developed for medium-term conservation using mineral oil (MO) overlay. Nodal segments of B. monnieri (two genotypes; IC249250, IC468878) were conserved using MO for 24 months. Single node explants were implanted on MS medium supplemented with 0.2 mg l−1 BA and were covered with MO. Subculture duration could be significantly enhanced from 6 to 24 months, on the above medium. Normal plants regenerated from conserved cultures were successfully established in soil. On the basis of 20 random amplified polymorphic DNA and 5 inter-simple sequence repeat primers analyses and bacoside A content using HPLC, no significant reproducible variation was observed between the controls and in vitro-conserved plants. The results demonstrate the feasibility of using MO for medium-term conservation of B. monnieri germplasm without any adverse genetical and biochemical effects.", "title": "" }, { "docid": "18ab36acafc5e0d39d02cecb0db2f7b3", "text": "Trigeminal trophic syndrome is a rare complication after peripheral or central damage to the trigeminal nerve, characterized by sensorial impairment in the trigeminal nerve territory and self-induced nasal ulceration. Conditions that can affect the trigeminal nerve include brainstem cerebrovascular disease, diabetes, tabes, syringomyelia, and postencephalopathic parkinsonism; it can also occur following the surgical management of trigeminal neuralgia. Trigeminal trophic syndrome may develop months to years after trigeminal nerve insult. Its most common presentation is a crescent-shaped ulceration within the trigeminal sensory territory. The ala nasi is the most frequently affected site. Trigeminal trophic syndrome is notoriously difficult to diagnose and manage. A clear history is of paramount importance, with exclusion of malignant, fungal, granulomatous, vasculitic, or infective causes. 
We present a case of ulceration of the left ala nasi after brainstem cerebrovascular accident.", "title": "" }, { "docid": "fa440af1d9ec65caf3cd37981919b56e", "text": "We present a method for spotting sporadically occurring gestures in a continuous data stream from body-worn inertial sensors. Our method is based on a natural partitioning of continuous sensor signals and uses a two-stage approach for the spotting task. In a first stage, signal sections likely to contain specific motion events are preselected using a simple similarity search. Those preselected sections are then further classified in a second stage, exploiting the recognition capabilities of hidden Markov models. Based on two case studies, we discuss implementation details of our approach and show that it is a feasible strategy for the spotting of various types of motion events. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d622d45275c7d4c177aaf3e34eb8062b", "text": "Detecting which tweets describe a specific event and clustering them is one of the main challenging tasks related to Social Media currently addressed in the NLP community. Existing approaches have mainly focused on detecting spikes in clusters around specific keywords or Named Entities (NE). However, one of the main drawbacks of such approaches is the difficulty in understanding when the same keywords describe different events. In this paper, we propose a novel approach that exploits NE mentions in tweets and their entity context to create a temporal event graph. Then, using simple graph theory techniques and a PageRank-like algorithm, we process the event graphs to detect clusters of tweets describing the same events. Experiments on two gold standard datasets show that our approach achieves state-of-the-art results both in terms of evaluation performances and the quality of the detected events.", "title": "" }, { "docid": "5fa0e48da2045baa1f00a27a9baa4897", "text": "The inferred cost of work-related stress call for prevention strategies that aim at detecting early warning signs at the workplace. This paper goes one step towards the goal of developing a personal health system for detecting stress. We analyze the discriminative power of electrodermal activity (EDA) in distinguishing stress from cognitive load in an office environment. A collective of 33 subjects underwent a laboratory intervention that included mild cognitive load and two stress factors, which are relevant at the workplace: mental stress induced by solving arithmetic problems under time pressure and psychosocial stress induced by social-evaluative threat. During the experiments, a wearable device was used to monitor the EDA as a measure of the individual stress reaction. Analysis of the data showed that the distributions of the EDA peak height and the instantaneous peak rate carry information about the stress level of a person. Six classifiers were investigated regarding their ability to discriminate cognitive load from stress. A maximum accuracy of 82.8% was achieved for discriminating stress from cognitive load. This would allow keeping track of stressful phases during a working day by using a wearable EDA device.", "title": "" }, { "docid": "472036e178742f009537acce8a54c863", "text": "This paper presents a comparative study of highspeed, low-power and low voltage full adder circuits. Our approach is based on XOR-XNOR (4T) design full adder circuits combined in a single unit. 
This technique helps in reducing the power consumption and the propagation delay while maintaining low complexity of logic design. Simulation results illustrate the superiority of the designed adder circuits against the conventional CMOS, TG and Hybrid adder circuits in terms of power, delay and power delay product (PDP) at low voltage. Noise analysis shows designed full adder circuit's work at high frequency and high temperature satisfactorily. Simulation results reveal that the designed circuits exhibit lower PDP, more power efficiency and faster when compared to the available full adder circuits at low voltage. The design is implemented on UMC 0.18µm process models in Cadence Virtuoso Schematic Composer at 1.8 V single ended supply voltage and simulations are carried out on Spectre S.", "title": "" }, { "docid": "3a68bf0d9d79a8b7794ea9d5d236eb41", "text": "This paper describes a camera-based observation system for football games that is used for the automatic analysis of football games and reasoning about multi-agent activity. The observation system runs on video streams produced by cameras set up for TV broadcasting. The observation system achieves reliability and accuracy through various mechanisms for adaptation, probabilistic estimation, and exploiting domain constraints. It represents motions compactly and segments them into classified ball actions.", "title": "" }, { "docid": "07fbce97ec4e5e7fd176507b64b01e33", "text": "Drought and heat-induced forest dieback and mortality are emerging global concerns. Although Mediterranean-type forest (MTF) ecosystems are considered to be resilient to drought and other disturbances, we observed a sudden and unprecedented forest collapse in a MTF in Western Australia corresponding with record dry and heat conditions in 2010/2011. An aerial survey and subsequent field investigation were undertaken to examine: the incidence and severity of canopy dieback and stem mortality, associations between canopy health and stand-related factors as well as tree species response. Canopy mortality was found to be concentrated in distinct patches, representing 1.5 % of the aerial sample (1,350 ha). Within these patches, 74 % of all measured stems (>1 cm DBHOB) had dying or recently killed crowns, leading to 26 % stem mortality six months following the collapse. Patches of canopy collapse were more densely stocked with the dominant species, Eucalyptus marginata, and lacked the prominent midstorey species Banksia grandis, compared to the surrounding forest. A differential response to the disturbance was observed among co-occurring tree species, which suggests contrasting strategies for coping with extreme water stress. These results suggest that MTFs, once thought to be resilient to climate change, are susceptible to sudden and severe forest collapse when key thresholds have been reached.", "title": "" }, { "docid": "72aef0bd0793116983c11883ebfb5525", "text": "Building facade classification by architectural styles allows categorization of large databases of building images into semantic categories belonging to certain historic periods, regions and cultural influences. Image databases sorted by architectural styles permit effective and fast image search for the purposes of content-based image retrieval, 3D reconstruction, 3D city-modeling, virtual tourism and indexing of cultural heritage buildings. 
Building facade classification is viewed as a task of classifying separate architectural structural elements, like windows, domes, towers, columns, etc, as every architectural style applies certain rules and characteristic forms for the design and construction of the structural parts mentioned. In the context of building facade architectural style classification the current paper objective is to classify the architectural style of facade windows. Typical windows belonging to Romanesque, Gothic and Renaissance/Baroque European main architectural periods are classified. The approach is based on clustering and learning of local features, applying intelligence that architects use to classify windows of the mentioned architectural styles in the training stage.", "title": "" }, { "docid": "922a4369bf08f23e1c0171dc35d5642b", "text": "Most automated facial expression analysis methods treat the face as a 2D object, flat like a sheet of paper. That works well provided images are frontal or nearly so. In real-world conditions, moderate to large head rotation is common and system performance to recognize expression degrades. Multi-view Convolutional Neural Networks (CNNs) have been proposed to increase robustness to pose, but they require greater model sizes and may generalize poorly across views that are not included in the training set. We propose FACSCaps architecture to handle multi-view and multi-label facial action unit (AU) detection within a single model that can generalize to novel views. Additionally, FACSCaps's ability to synthesize faces enables insights into what is leaned by the model. FACSCaps models video frames using matrix capsules, where hierarchical pose relationships between face parts are built into internal representations. The model is trained by jointly optimizing a multi-label loss and the reconstruction accuracy. FACSCaps was evaluated using the FERA 2017 facial expression dataset that includes spontaneous facial expressions in a wide range of head orientations. FACSCaps outperformed both state-of-the-art CNNs and their temporal extensions.", "title": "" }, { "docid": "4106a8cf90180e237fdbe847c13d0126", "text": "The Internet has witnessed the proliferation of applications and services that rely on HTTP as application protocol. Users play games, read emails, watch videos, chat and access web pages using their PC, which in turn downloads tens or hundreds of URLs to fetch all the objects needed to display the requested content. As result, billions of URLs are observed in the network. When monitoring the traffic, thus, it is becoming more and more important to have methodologies and tools that allow one to dig into this data and extract useful information. In this paper, we present CLUE, Clustering for URL Exploration, a methodology that leverages clustering algorithms, i.e., unsupervised techniques developed in the data mining field to extract knowledge from passive observation of URLs carried by the network. This is a challenging problem given the unstructured format of URLs, which, being strings, call for specialized approaches. Inspired by text-mining algorithms, we introduce the concept of URL-distance and use it to compose clusters of URLs using the well-known DBSCAN algorithm. Experiments on actual datasets show encouraging results. Well-separated and consistent clusters emerge and allow us to identify, e.g., malicious traffic, advertising services, and thirdparty tracking systems. 
In a nutshell, our clustering algorithm offers the means to get insights on the data carried by the network, with applications in the security or privacy protection fields.", "title": "" }, { "docid": "d12e99d6dc078d24a171f921ac0ef4d3", "text": "An omni-directional rolling spherical robot equipped with a high-rate flywheel (BYQ-V) is presented, the gyroscopic effects of high-rate flywheel can further enhance the dynamic stability of the spherical robot. This robot is designed for territory or lunar exploration in the future. The mechanical structure and control system of the robot are given particularly. Using the constrained Lagrangian method, the simplified dynamic model of the robot is derived under some assumptions, Moreover, a Linear Quadratic Regulator (LQR) controller and Percentage Derivative (PD) controller are designed to implement the pose and velocity control of the robot respectively, Finally, the dynamic model and the controllers are validated through simulation study and prototype experiment.", "title": "" }, { "docid": "429f27ab8039a9e720e9122f5b1e3bea", "text": "We give a new method for direct reconstruction of three-dimensional objects from a few electron micrographs taken at angles which need not exceed a range of 60 degrees. The method works for totally asymmetric objects, and requires little computer time or storage. It is also applicable to X-ray photography, and may greatly reduce the exposure compared to current methods of body-section radiography.", "title": "" }, { "docid": "bde03a5d90507314ce5f034b9b764417", "text": "Autonomous household robots are supposed to accomplish complex tasks like cleaning the dishes which involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way how they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.", "title": "" }, { "docid": "92dca681aa54142d24e3b7bf1854a2d2", "text": "Holographic Recurrent Networks (HRNs) are recurrent networks which incorporate associative memory techniques for storing sequential structure. HRNs can be easily and quickly trained using gradient descent techniques to generate sequences of discrete outputs and trajectories through continuous space. The performance of HRNs is found to be superior to that of ordinary recurrent networks on these sequence generation tasks.", "title": "" }, { "docid": "9d73ff3f8528bb412c585d802873fcb4", "text": "In this work, we introduce a novel interpretation of residual networks showing they are exponential ensembles. This observation is supported by a large-scale lesion study that demonstrates they behave just like ensembles at test time. Subsequently, we perform an analysis showing these ensembles mostly consist of networks that are each relatively shallow. 
For example, contrary to our expectations, most of the gradient in a residual network with 110 layers comes from an ensemble of very short networks, i.e., only 10-34 layers deep. This suggests that in addition to describing neural networks in terms of width and depth, there is a third dimension: multiplicity, the size of the implicit ensemble. Ultimately, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network – rather, they avoid the problem simply by ensembling many short networks together. This insight reveals that depth is still an open research question and invites the exploration of the related notion of multiplicity.", "title": "" } ]
scidocsrr
d17f2cc0093908c1a716ab0b788169e8
RoarNet: A Robust 3D Object Detection based on RegiOn Approximation Refinement
[ { "docid": "df609125f353505fed31eee302ac1742", "text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].", "title": "" }, { "docid": "a214ed60c288762210189f14a8cf8256", "text": "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.", "title": "" }, { "docid": "73a62915c29942d2fac0570cac7eb3e0", "text": "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. In the inference, the networks outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. 
We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.", "title": "" } ]
[ { "docid": "2fd7cc65c34551c90a72fc3cb4665336", "text": "Generating natural language requires conveying content in an appropriate style. We explore two related tasks on generating text of varying formality: monolingual formality transfer and formality-sensitive machine translation. We propose to solve these tasks jointly using multi-task learning, and show that our models achieve state-of-the-art performance for formality transfer and are able to perform formality-sensitive translation without being explicitly trained on styleannotated translation examples.", "title": "" }, { "docid": "ee865e3291eff95b5977b54c22b59f19", "text": "Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of userexposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system’s system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf event open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf event usage patterns we develop a custom tool, perf fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls.", "title": "" }, { "docid": "661d5db6f4a8a12b488d6f486ea5995e", "text": "Reliability and high availability have always been a major concern in distributed systems. Providing highly available and reliable services in cloud computing is essential for maintaining customer confidence and satisfaction and preventing revenue losses. Although various solutions have been proposed for cloud availability and reliability, but there are no comprehensive studies that completely cover all different aspects in the problem. This paper presented a ‘Reference Roadmap’ of reliability and high availability in cloud computing environments. A big picture was proposed which was divided into four steps specifying through four pivotal questions starting with ‘Where?’, ‘Which?’, ‘When?’ and ‘How?’ keywords. The desirable result of having a highly available and reliable cloud system could be gained by answering these questions. Each step of this reference roadmap proposed a specific concern of a special portion of the issue. Two main research gaps were proposed by this reference roadmap.", "title": "" }, { "docid": "cef79010b9772639d42351c960b68c83", "text": "In many real world elections, agents are not required to rank all candidates. We study three of the most common meth ods used to modify voting rules to deal with such partial votes. These methods modify scoring rules (like the Borda count), e limination style rules (like single transferable vote) and rule s based on the tournament graph (like Copeland) respectively. 
We argu e that with an elimination style voting rule like single transfera ble vote, partial voting does not change the situations where strateg ic voting is possible. However, with scoring rules and rules based on the tournament graph, partial voting can increase the situations wher e strategic voting is possible. As a consequence, the computational com plexity of computing a strategic vote can change. For example, with B orda count, the complexity of computing a strategic vote can decr ease or stay the same depending on how we score partial votes.", "title": "" }, { "docid": "8af777a64f8f2127552a05c8ea462416", "text": "This work addresses the issue of fire and smoke detection in a scene within a video surveillance framework. Detection of fire and smoke pixels is at first achieved by means of a motion detection algorithm. In addition, separation of smoke and fire pixels using colour information (within appropriate spaces, specifically chosen in order to enhance specific chromatic features) is performed. In parallel, a pixel selection based on the dynamics of the area is carried out in order to reduce false detection. The output of the three parallel algorithms are eventually fused by means of a MLP.", "title": "" }, { "docid": "fca58dee641af67f9bb62958b5b088f2", "text": "This work explores the possibility of mixing two different fingerprints, pertaining to two different fingers, at the image level in order to generate a new fingerprint. To mix two fingerprints, each fingerprint pattern is decomposed into two different components, viz., the continuous and spiral components. After prealigning the components of each fingerprint, the continuous component of one fingerprint is combined with the spiral component of the other fingerprint. Experiments on the West Virginia University (WVU) and FVC2002 datasets show that mixing fingerprints has several benefits: (a) it can be used to generate virtual identities from two different fingers; (b) it can be used to obscure the information present in an individual's fingerprint image prior to storing it in a central database; and (c) it can be used to generate a cancelable fingerprint template, i.e., the template can be reset if the mixed fingerprint is compromised.", "title": "" }, { "docid": "b14010454fe4b9f9712c13cbf9a5e23b", "text": "In this paper we propose an approach to Part of Speech (PoS) tagging using a combination of Hidden Markov Model and error driven learning. For the NLPAI joint task, we also implement a chunker using Conditional Random Fields (CRFs). The results for the PoS tagging and chunking task are separately reported along with the results of the joint task.", "title": "" }, { "docid": "44ffac24ef4d30a8104a2603bb1cdcb1", "text": "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them “Networks on Convolutional feature maps” (NoCs). 
We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015.", "title": "" }, { "docid": "69d8d5b38456b30d3252d95cb43734cf", "text": "Article prepared for a revised edition of the ENCYCLOPEDIA OF ARTIFICIAL INTELLIGENCE, S. Shapiro (editor), to be published by John Wiley, 1992. Final Draft; DO NOT REPRODUCE OR CIRCULATE. This copy is for review only. Please do not cite or copy. Prepared using troff, pic, eqn, tbl and bib under Unix 4.3 BSD.", "title": "" }, { "docid": "e61a0ba24db737d42a730d5738583ffa", "text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the verification problem is decidable; this is shown using results in algebraic and transcendental number theory.", "title": "" }, { "docid": "533b8bf523a1fb69d67939607814dc9c", "text": "Docker is an open platform for developers and system administrators to build, ship, and run distributed applications using Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. The main advantage is that Docker can get code tested and deployed into production as fast as possible. Different applications can be run over Docker containers with language independency. In this paper the performance of these Docker containers is evaluated based on their system performance. That is based on system resource utilization. Different benchmarking tools are used for this. Performance based on file system is evaluated using Bonnie++. Other system resources such as CPU utilization, memory utilization etc. are evaluated based on the benchmarking code (using psutil) developed using python. Detailed results obtained from all these tests are also included in this paper. The results include CPU utilization, memory utilization, CPU count, CPU times, Disk partition, network I/O counter etc.", "title": "" }, { "docid": "68b2608c91525f3147f74b41612a9064", "text": "Protective effects of sweet orange (Citrus sinensis) peel and their bioactive compounds on oxidative stress were investigated. According to HPLC-DAD and HPLC-MS/MS analysis, hesperidin (HD), hesperetin (HT), nobiletin (NT), and tangeretin (TT) were present in water extracts of sweet orange peel (WESP). The cytotoxic effect in 0.2mM t-BHP-induced HepG2 cells was inhibited by WESP and their bioactive compounds. The protective effect of WESP and their bioactive compounds in 0.2mM t-BHP-induced HepG2 cells may be associated with positive regulation of GSH levels and antioxidant enzymes, decrease in ROS formation and TBARS generation, increase in the mitochondria membrane potential and Bcl-2/Bax ratio, as well as decrease in caspase-3 activation.
Overall, WESP displayed a significant cytoprotective effect against oxidative stress, which may be most likely because of the phenolics-related bioactive compounds in WESP, leading to maintenance of the normal redox status of cells.", "title": "" }, { "docid": "dea52c761a9f4d174e9bd410f3f0fa38", "text": "Much computational work has been done on identifying and interpreting the meaning of metaphors, but little work has been done on understanding the motivation behind the use of metaphor. To computationally model discourse and social positioning in metaphor, we need a corpus annotated with metaphors relevant to speaker intentions. This paper reports a corpus study as a first step towards computational work on social and discourse functions of metaphor. We use Amazon Mechanical Turk (MTurk) to annotate data from three web discussion forums covering distinct domains. We then compare these to annotations from our own annotation scheme which distinguish levels of metaphor with the labels: nonliteral, conventionalized, and literal. Our hope is that this work raises questions about what new work needs to be done in order to address the question of how metaphors are used to achieve social goals in interaction.", "title": "" }, { "docid": "a03d0772d8c3e1fd5c954df2b93757e3", "text": "The tumor microenvironment is a complex system, playing an important role in tumor development and progression. Besides cellular stromal components, extracellular matrix fibers, cytokines, and other metabolic mediators are also involved. In this review we outline the potential role of hypoxia, a major feature of most solid tumors, within the tumor microenvironment and how it contributes to immune resistance and immune suppression/tolerance and can be detrimental to antitumor effector cell functions. We also outline how hypoxic stress influences immunosuppressive pathways involving macrophages, myeloid-derived suppressor cells, T regulatory cells, and immune checkpoints and how it may confer tumor resistance. Finally, we discuss how microenvironmental hypoxia poses both obstacles and opportunities for new therapeutic immune interventions.", "title": "" }, { "docid": "e0b8b4e916f5e4799ad2ab95d71b0b26", "text": "Automation plays a very important role in every field of human life. This paper contains the proposal of a fully automated menu ordering system in which the paper based menu is replaced by a user friendly Touchscreen based menu card. The system has PIC microcontroller which is interfaced with the input and output modules. The input module is the touchscreen sensor which is placed on GLCD (Graphical Liquid Crystal Display) to have a graphic image display, which takes the input from the user and provides the same information to the microcontroller. The output module is a Zigbee module which is used for communication between system at the table and system for receiving section. Microcontroller also displays the menu items on the GLCD. At the receiving end the selected items will be displayed on the LCD and by using the conveyer belt the received order will send to the particular table.", "title": "" }, { "docid": "257ffbc75578916dc89a703598ac0447", "text": "Implant surgery in mandibular anterior region may turn from an easy minor surgery into a complicated one for the surgeon, due to inadequate knowledge of the anatomy of the surgical area and/or ignorance toward the required surgical protocol. 
Hence, the purpose of this article is to present an overview on the: (a) Incidence of massive bleeding and its consequences after implant placement in mandibular anterior region. (b) Its etiology, the precautionary measures to be taken to avoid such an incidence in clinical practice and management of such a hemorrhage if at all happens. An inclusion criterion for selection of article was defined, and an electronic Medline search through different database using different keywords and manual search in journals and books was executed. Relevant articles were selected based upon inclusion criteria to form the valid protocols for implant surgery in the anterior mandible. Further, from the selected articles, 21 articles describing case reports were summarized separately in a table to alert the dental surgeons about the morbidity they could come across while operating in this region. If all the required adequate measures for diagnosis and treatment planning are taken and appropriate surgical protocol is followed, mandibular anterior region is no doubt a preferable area for implant placement.", "title": "" }, { "docid": "f3e9858900dd75c86d106856e63f1ab2", "text": "In the near future, new storage-class memory (SCM) technologies -- such as phase-change memory and memristors -- will radically change the nature of long-term storage. These devices will be cheap, non-volatile, byte addressable, and near DRAM density and speed. While SCM offers enormous opportunities, profiting from them will require new storage systems specifically designed for SCM's properties.\n This paper presents Echo, a persistent key-value storage system designed to leverage the advantages and address the challenges of SCM. The goals of Echo include high performance for both small and large data objects, recoverability after failure, and scalability on multicore systems. Echo achieves its goals through the use of a two-level memory design targeted for memory systems containing both DRAM and SCM, exploitation of SCM's byte addressability for fine-grained transactions in non-volatile memory, and the use of snapshot isolation for concurrency, consistency, and versioning. Our evaluation demonstrates that Echo's SCM-centric design achieves the durability guarantees of the best disk-based stores with the performance characteristics approaching the best in-memory key-value stores.", "title": "" }, { "docid": "809392d489af5e1f8e85a9ad8a8ba9e0", "text": "Although a large number of ion channels are now believed to be regulated by phosphoinositides, particularly phosphoinositide 4,5-bisphosphate (PIP2), the mechanisms involved in phosphoinositide regulation are unclear. For the TRP superfamily of ion channels, the role and mechanism of PIP2 modulation has been especially difficult to resolve. Outstanding questions include: is PIP2 the endogenous regulatory lipid; does PIP2 potentiate all TRPs or are some TRPs inhibited by PIP2; where does PIP2 interact with TRP channels; and is the mechanism of modulation conserved among disparate subfamilies? We first addressed whether the PIP2 sensor resides within the primary sequence of the channel itself, or, as recently proposed, within an accessory integral membrane protein called Pirt. Here we show that Pirt does not alter the phosphoinositide sensitivity of TRPV1 in HEK-293 cells, that there is no FRET between TRPV1 and Pirt, and that dissociated dorsal root ganglion neurons from Pirt knock-out mice have an apparent affinity for PIP2 indistinguishable from that of their wild-type littermates. 
We followed by focusing on the role of the C terminus of TRPV1 in sensing PIP2. Here, we show that the distal C-terminal region is not required for PIP2 regulation, as PIP2 activation remains intact in channels in which the distal C-terminal has been truncated. Furthermore, we used a novel in vitro binding assay to demonstrate that the proximal C-terminal region of TRPV1 is sufficient for PIP2 binding. Together, our data suggest that the proximal C-terminal region of TRPV1 can interact directly with PIP2 and may play a key role in PIP2 regulation of the channel.", "title": "" }, { "docid": "b19e77ddb2c2ca5cc18bd8ba5425a698", "text": "In pharmaceutical formulations, phospholipids obtained from plant or animal sources and synthetic phospholipids are used. Natural phospholipids are purified from, e.g., soybeans or egg yolk using non-toxic solvent extraction and chromatographic procedures with low consumption of energy and minimum possible waste. Because of the use of validated purification procedures and sourcing of raw materials with consistent quality, the resulting products differing in phosphatidylcholine content possess an excellent batch to batch reproducibility with respect to phospholipid and fatty acid composition. The natural phospholipids are described in pharmacopeias and relevant regulatory guidance documentation of the Food and Drug Administration (FDA) and European Medicines Agency (EMA). Synthetic phospholipids with specific polar head group, fatty acid composition can be manufactured using various synthesis routes. Synthetic phospholipids with the natural stereochemical configuration are preferably synthesized from glycerophosphocholine (GPC), which is obtained from natural phospholipids, using acylation and enzyme catalyzed reactions. Synthetic phospholipids play compared to natural phospholipid (including hydrogenated phospholipids), as derived from the number of drug products containing synthetic phospholipids, a minor role. Only in a few pharmaceutical products synthetic phospholipids are used. Natural phospholipids are used in oral, dermal, and parenteral products including liposomes. Natural phospholipids instead of synthetic phospholipids should be selected as phospholipid excipients for formulation development, whenever possible, because natural phospholipids are derived from renewable sources and produced with more ecologically friendly processes and are available in larger scale at relatively low costs compared to synthetic phospholipids. Practical applications: For selection of phospholipid excipients for pharmaceutical formulations, natural phospholipids are preferred compared to synthetic phospholipids because they are available at large scale with reproducible quality at lower costs of goods. They are well accepted by regulatory authorities and are produced using less chemicals and solvents at higher yields. In order to avoid scale up problems during pharmaceutical development and production, natural phospholipid excipients instead of synthetic phospholipids should be selected whenever possible.", "title": "" }, { "docid": "d372c1fba12412dac5dc850baf3267b9", "text": "Smart grid is an intelligent power network featured by its two-way flows of electricity and information. With an integrated communication infrastructure, smart grid manages the operation of all connected components to provide reliable and sustainable electricity supplies. Many advanced communication technologies have been identified for their applications in different domains of smart grid networks. 
This paper focuses on wireless communication networking technologies for smart grid neighborhood area networks (NANs). In particular, we aim to offer a comprehensive survey to address various important issues on implementation of smart grid NANs, including network topology, gateway deployment, routing algorithms, and security. We will identify four major challenges for the implementation of NANs, including timeliness management, security assurance, compatibility design, and cognitive spectrum access, based on which the future research directions are suggested.", "title": "" } ]
scidocsrr
7a6c13536dd2b138cdfdf822f28d8869
A lightweight active service migration framework for computational offloading in mobile cloud computing
[ { "docid": "0e55e64ddc463d0ea151de8efe40183f", "text": "Vehicular networking has become a significant research area due to its specific features and applications such as standardization, efficient traffic management, road safety and infotainment. Vehicles are expected to carry relatively more communication systems, on board computing facilities, storage and increased sensing power. Hence, several technologies have been deployed to maintain and promote Intelligent Transportation Systems (ITS). Recently, a number of solutions were proposed to address the challenges and issues of vehicular networks. Vehicular Cloud Computing (VCC) is one of the solutions. VCC is a new hybrid technology that has a remarkable impact on traffic management and road safety by instantly using vehicular resources, such as computing, storage and internet for decision making. This paper presents the state-of-the-art survey of vehicular cloud computing. Moreover, we present a taxonomy for vehicular cloud in which special attention has been devoted to the extensive applications, cloud formations, key management, inter cloud communication systems, and broad aspects of privacy and security issues. Through an extensive review of the literature, we design an architecture for VCC, itemize the properties required in vehicular cloud that support this model. We compare this mechanism with normal Cloud Computing (CC) and discuss open research issues and future directions. By reviewing and analyzing literature, we found that VCC is a technologically feasible and economically viable technological shifting paradigm for converging intelligent vehicular networks towards autonomous traffic, vehicle control and perception systems. & 2013 Published by Elsevier Ltd.", "title": "" }, { "docid": "aa18c10c90af93f38c8fca4eff2aab09", "text": "The unabated flurry of research activities to augment various mobile devices by leveraging heterogeneous cloud resources has created a new research domain called Mobile Cloud Computing (MCC). In the core of such a non-uniform environment, facilitating interoperability, portability, and integration among heterogeneous platforms is nontrivial. Building such facilitators in MCC requires investigations to understand heterogeneity and its challenges over the roots. Although there are many research studies in mobile computing and cloud computing, convergence of these two areas grants further academic efforts towards flourishing MCC. In this paper, we define MCC, explain its major challenges, discuss heterogeneity in convergent computing (i.e. mobile computing and cloud computing) and networking (wired and wireless networks), and divide it into two dimensions, namely vertical and horizontal. Heterogeneity roots are analyzed and taxonomized as hardware, platform, feature, API, and network. Multidimensional heterogeneity in MCC results in application and code fragmentation problems that impede development of cross-platform mobile applications which is mathematically described. The impacts of heterogeneity in MCC are investigated, related opportunities and challenges are identified, and predominant heterogeneity handling approaches like virtualization, middleware, and service oriented architecture (SOA) are discussed. We outline open issues that help in identifying new research directions in MCC.", "title": "" } ]
[ { "docid": "7f799fbe03849971cb3272e35e7b13db", "text": "Text often expresses the writer's emotional state or evokes emotions in the reader. The nature of emotional phenomena like reading and writing can be interpreted in different ways and represented with different computational models. Affective computing (AC) researchers often use a categorical model in which text data is associated with emotional labels. We introduce a new way of using normative databases as a way of processing text with a dimensional model and compare it with different categorical approaches. The approach is evaluated using four data sets of texts reflecting different emotional phenomena. An emotional thesaurus and a bag-­‐of-­‐words model are used to generate vectors for each pseudo-­‐ document, then for the categorical models three dimensionality reduction techniques are evaluated: Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Non-­‐negative Matrix Factorization (NMF). For the dimensional model a normative database is used to produce three-­‐dimensional vectors (valence, arousal, dominance) for each pseudo-­‐document. This 3-­‐dimensional model can be used to generate psychologically driven visualizations. Both models can be used for affect detection based on distances amongst categories and pseudo-­‐documents. Experiments show that the categorical model using NMF and the dimensional model tend to perform best. 1. INTRODUCTION Emotions and affective states are pervasive in all forms of communication, including text based, and increasingly recognized as important to understanding the full meaning that a message conveys, or the impact it will have on readers. Given the increasing amounts of textual communication being produced (e.g. emails, user created content, published content) researchers are seeking automated language processing techniques that include models of emotions. Emotions and other affective states (e.g. moods) have been studied by many disciplines. Affect scientists have studied emotions since Darwin (Darwin, 1872), and different schools within psychology have produced different theories representing different ways of interpreting affective phenomena (comprehensively reviewed in Davidson, Scherer and Goldsmith, 2003). In the last decade technologists have also started contributing to this research. Affective Computing (AC) in particular is contributing new ways to improve communication between the sensitive human and the unemotionally computer. AC researchers have developed computational systems that recognize and respond to the affective states of the user (Calvo and D'Mello, 2010). Affect-­‐sensitive user interfaces are being developed in a number of domains including gaming, mental health, and learning technologies. The basic tenet behind most AC systems is that automatically recognizing and responding to a user's affective states during interactions with a computer, …", "title": "" }, { "docid": "74dead8ad89ae4a55105fb7ae95d3e20", "text": "Improved health is one of the many reasons people choose to adopt a vegetarian diet, and there is now a wealth of evidence to support the health benefi ts of a vegetarian diet. Abstract: There is now a significant amount of research that demonstrates the health benefits of vegetarian and plant-based diets, which have been associated with a reduced risk of obesity, diabetes, heart disease, and some types of cancer as well as increased longevity. 
Vegetarian diets are typically lower in fat, particularly saturated fat, and higher in dietary fiber. They are also likely to include more whole grains, legumes, nuts, and soy protein, and together with the absence of red meat, this type of eating plan may provide many benefits for the prevention and treatment of obesity and chronic health problems, including diabetes and cardiovascular disease. Although a well-planned vegetarian or vegan diet can meet all the nutritional needs of an individual, it may be necessary to pay particular attention to some nutrients to ensure an adequate intake, particularly if the person is on a vegan diet. This article will review the evidence for the health benefits of a vegetarian diet and also discuss strategies for meeting the nutritional needs of those following a vegetarian or plant-based eating pattern.", "title": "" }, { "docid": "84d8058c67870f8606b485e7ad430c58", "text": "Stanford typed dependencies are a widely desired representation of natural language sentences, but parsing is one of the major computational bottlenecks in text analysis systems. In light of the evolving definition of the Stanford dependencies and developments in statistical dependency parsing algorithms, this paper revisits the question of Cer et al. (2010): what is the tradeoff between accuracy and speed in obtaining Stanford dependencies in particular? We also explore the effects of input representations on this tradeoff: part-of-speech tags, the novel use of an alternative dependency representation as input, and distributional representations of words. We find that direct dependency parsing is a more viable solution than it was found to be in the past. An accompanying software release can be found at: http://www.ark.cs.cmu.edu/TBSD", "title": "" }, { "docid": "a4a5c6cbec237c2cd6fb3abcf6b4a184", "text": "Developing automatic diagnostic tools for the early detection of skin cancer lesions in dermoscopic images can help to reduce melanoma-induced mortality. Image segmentation is a key step in the automated skin lesion diagnosis pipeline. In this paper, a fast and fully-automatic algorithm for skin lesion segmentation in dermoscopic images is presented. Delaunay Triangulation is used to extract a binary mask of the lesion region, without the need of any training stage. A quantitative experimental evaluation has been conducted on a publicly available database, by taking into account six well-known state-of-the-art segmentation methods for comparison. The results of the experimental analysis demonstrate that the proposed approach is highly accurate when dealing with benign lesions, while the segmentation accuracy significantly decreases when melanoma images are processed. This behavior led us to consider geometrical and color features extracted from the binary masks generated by our algorithm for classification, achieving promising results for melanoma detection.", "title": "" }, { "docid": "ced3a56c5469528e8fa5784dc0fff5d4", "text": "This paper explores the relation between a set of behavioural information security governance factors and employees' information security awareness. To enable statistical analysis between proposed relations, data was collected from two different samples in 24 organisations: 24 information security executives and 240 employees.
The results reveal that having a formal unit with explicit responsibility for information security, utilizing coordinating committees, and sharing security knowledge through an intranet site significantly correlates with dimensions of employees’ information security awareness. However, regular identification of vulnerabilities in information systems and related processes is significantly negatively correlated with employees’ information security awareness, in particular managing passwords. The effect of behavioural information security governance on employee information security awareness is an understudied topic. Therefore, this study is explorative in nature and the results are preliminary. Nevertheless, the paper provides implications for both research and practice.", "title": "" }, { "docid": "6e923a586a457521e9de9d4a9cab77ad", "text": "We present a new approach to the matting problem which splits the task into two steps: interactive trimap extraction followed by trimap-based alpha matting. By doing so we gain considerably in terms of speed and quality and are able to deal with high resolution images. This paper has three contributions: (i) a new trimap segmentation method using parametric max-flow; (ii) an alpha matting technique for high resolution images with a new gradient preserving prior on alpha; (iii) a database of 27 ground truth alpha mattes of still objects, which is considerably larger than previous databases and also of higher quality. The database is used to train our system and to validate that both our trimap extraction and our matting method improve on state-of-the-art techniques.", "title": "" }, { "docid": "0ad68f20acf338f4051a93ba5e273187", "text": "FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor that can enable a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.", "title": "" }, { "docid": "105f34c3fa2d4edbe83d184b7cf039aa", "text": "Software development methodologies are constantly evolving due to changing technologies and new demands from users. Today's dynamic business environment has given rise to emergent organizations that continuously adapt their structures, strategies, and policies to suit the new environment [12]. Such organizations need information systems that constantly evolve to meet their changing requirements---but the traditional, plan-driven software development methodologies lack the flexibility to dynamically adjust the development process.", "title": "" }, { "docid": "b7eb2c65c459c9d5776c1e2cba84706c", "text": "Observers, searching for targets among distractor items, guide attention with a mix of top-down information--based on observers' knowledge--and bottom-up information--stimulus-based and largely independent of that knowledge. 
There are 2 types of top-down guidance: explicit information (e.g., verbal description) and implicit priming by preceding targets (top-down because it implies knowledge of previous searches). Experiments 1 and 2 separate bottom-up and top-down contributions to singleton search. Experiment 3 shows that priming effects are based more strongly on target than on distractor identity. Experiments 4 and 5 show that more difficult search for one type of target (color) can impair search for other types (size, orientation). Experiment 6 shows that priming guides attention and does not just modulate response.", "title": "" }, { "docid": "220acd23ebb9c69cfb9ee00b063468c6", "text": "This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.", "title": "" }, { "docid": "7b25d1c4d20379a8a0fabc7398ea2c28", "text": "In this paper we introduce an efficient and stable implicit SPH method for the physically-based simulation of incompressible fluids. In the area of computer graphics the most efficient SPH approaches focus solely on the correction of the density error to prevent volume compression. However, the continuity equation for incompressible flow also demands a divergence-free velocity field which is neglected by most methods. Although a few methods consider velocity divergence, they are either slow or have a perceivable density fluctuation.\n Our novel method uses an efficient combination of two pressure solvers which enforce low volume compression (below 0.01%) and a divergence-free velocity field. This can be seen as enforcing incompressibility both on position level and velocity level. The first part is essential for realistic physical behavior while the divergence-free state increases the stability significantly and reduces the number of solver iterations. Moreover, it allows larger time steps which yields a considerable performance gain since particle neighborhoods have to be updated less frequently. Therefore, our divergence-free SPH (DFSPH) approach is significantly faster and more stable than current state-of-the-art SPH methods for incompressible fluids. We demonstrate this in simulations with millions of fast moving particles.", "title": "" }, { "docid": "b8700283c7fb65ba2e814adffdbd84f8", "text": "Human immunoglobulin preparations for intravenous or subcutaneous administration are the cornerstone of treatment in patients with primary immunodeficiency diseases affecting the humoral immune system. Intravenous preparations have a number of important uses in the treatment of other diseases in humans as well, some for which acceptable treatment alternatives do not exist. We provide an update of the evidence-based guideline on immunoglobulin therapy, last published in 2006.
Given the potential risks and inherent scarcity of human immunoglobulin, careful consideration of its indications and administration is warranted.", "title": "" }, { "docid": "c7e3fc9562a02818bba80d250241511d", "text": "Convolutional networks trained on large supervised datasets produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weakly-labeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.", "title": "" }, { "docid": "5bf9aeb37fc1a82420b2ff4136f547d0", "text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.", "title": "" }, { "docid": "fc3c4f6c413719bbcf7d13add8c3d214", "text": "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends, except for tastes in classical/jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.", "title": "" }, { "docid": "f489e2c0d6d733c9e2dbbdb1d7355091", "text": "In many signal processing applications, the signals provided by the sensors are mixtures of many sources. The problem of separation of sources is to extract the original signals from these mixtures. A new algorithm, based on ideas of backpropagation learning, is proposed for source separation. No a priori information on the sources themselves is required, and the algorithm can deal even with non-linear mixtures. After a short overview of previous works in that field, we will describe the proposed algorithm.
Then, some experimental results will be discussed.", "title": "" }, { "docid": "e5261ee5ea2df8bae7cc82cb4841dea0", "text": "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without full semantic understanding of video content, this framework takes advantage of computational attention models and eliminates the need for complex heuristic rules in video summarization. A set of methods for modeling audio-visual attention features is proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.", "title": "" }, { "docid": "22c72f94040cd65dde8e00a7221d2432", "text": "Research on “How to create a fair, convenient attendance management system” is being pursued fervently by academics and government departments. This study is based on biometric recognition technology. The hand geometry machine captures the personal hand geometry data as the biometric code and applies this data in the attendance management system as the attendance record. Attendance records that use this technology are difficult for others to replicate. It can improve the reliability of the attendance records and avoid the fraudulent practices that can occur when a register is used. This research uses a social survey method (questionnaire) to evaluate the theory and practice of introducing biometric recognition technology (hand geometry capture) into the attendance management system.", "title": "" } ]
scidocsrr
75fcd9ee01bbccf5e009284699ff1a0d
Floral morphology as the main driver of flower-feeding insect occurrences in the Paris region
[ { "docid": "8f4a0c6252586fa01133f9f9f257ec87", "text": "The pls package implements principal component regression (PCR) and partial least squares regression (PLSR) in R (R Development Core Team 2006b), and is freely available from the Comprehensive R Archive Network (CRAN), licensed under the GNU General Public License (GPL). The user interface is modelled after the traditional formula interface, as exemplified by lm. This was done so that people used to R would not have to learn yet another interface, and also because we believe the formula interface is a good way of working interactively with models. It thus has methods for generic functions like predict, update and coef. It also has more specialised functions like scores, loadings and RMSEP, and a flexible crossvalidation system. Visual inspection and assessment is important in chemometrics, and the pls package has a number of plot functions for plotting scores, loadings, predictions, coefficients and RMSEP estimates. The package implements PCR and several algorithms for PLSR. The design is modular, so that it should be easy to use the underlying algorithms in other functions. It is our hope that the package will serve well both for interactive data analysis and as a building block for other functions or packages using PLSR or PCR. We will here describe the package and how it is used for data analysis, as well as how it can be used as a part of other packages. Also included is a section about formulas and data frames, for people not used to the R modelling idioms.", "title": "" } ]
[ { "docid": "2b38ac7d46a1b3555fef49a4e02cac39", "text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.", "title": "" }, { "docid": "fe1697301e7480ae255aa4d9f60b1040", "text": "Background and aim\nType 2 diabetes mellitus (T2DM) is one of the major diseases confronting the health care systems. In diabetes mellitus (DM), combined use of oral hypoglycemic medications has been shown to be more effective than metformin (Met) alone in glycemic control. This study determined the effects of Ginkgo biloba (GKB) extract as an adjuvant to Met in patients with uncontrolled T2DM.\n\n\nSubjects and methods\nSixty T2DM patients were recruited in a randomized, placebo-controlled, double-blinded, and multicenter trial. The patients, currently using Met, were randomly grouped into those treated with either GKB extract (120 mg/day) or placebo (starch, 120 mg/day) for 90 days. Blood glycated hemoglobin (HbA1c), fasting serum glucose, serum insulin, body mass index (BMI), waist circumference (WC), insulin resistance, and visceral adiposity index (VAI) were determined before (baseline) and after 90 days of GKB extract treatment.\n\n\nResults\nGKB extract significantly decreased blood HbA1c (7.7%±1.2% vs baseline 8.6%±1.6%, P<0.001), fasting serum glucose (154.7±36.1 mg/dL vs baseline 194.4±66.1 mg/dL, P<0.001) and insulin (13.4±7.8 μU/mL vs baseline 18.5±8.9 μU/mL, P=0.006) levels, BMI (31.6±5.1 kg/m2 vs baseline 34.0±6.0 kg/m2, P<0.001), waist WC (102.6±10.5 cm vs baseline 106.0±10.9 cm, P<0.001), and VAI (158.9±67.2 vs baseline 192.0±86.2, P=0.007). GKB extract did not negatively impact the liver, kidney, or hematopoietic functions.\n\n\nConclusion\nGKB extract as an adjuvant was effective in improving Met treatment outcomes in T2DM patients. Thus, it is suggested that GKB extract is an effective dietary supplement for the control of DM in humans.", "title": "" }, { "docid": "246bbb92bc968d20866b8c92a10f8ac7", "text": "This survey paper provides an overview of content-based music information retrieval systems, both for audio and for symbolic music notation. Matching algorithms and indexing methods are briefly presented. The need for a TREC-like comparison of matching algorithms such as MIREX at ISMIR becomes clear from the high number of quite different methods which so far only have been used on different data collections. 
We placed the systems on a map showing the tasks and users for which they are suitable, and we found that existing content-based retrieval systems fail to cover a gap between the very general and the very specific retrieval tasks.", "title": "" }, { "docid": "406e6a8966aa43e7538030f844d6c2f0", "text": "The idea of developing software components was envisioned more than forty years ago. In the past two decades, Component-Based Software Engineering (CBSE) has emerged as a distinguishable approach in software engineering, and it has attracted the attention of many researchers, which has led to many results being published in the research literature. There is a huge amount of knowledge encapsulated in conferences and journals targeting this area, but a systematic analysis of that knowledge is missing. For this reason, we aim to investigate the state-of-the-art of the CBSE area through a detailed literature review. To do this, 1231 studies dating from 1984 to 2012 were analyzed. Using the available evidence, this paper addresses five dimensions of CBSE: main objectives, research topics, application domains, research intensity and applied research methods. The main objectives found were to increase productivity, save costs and improve quality. The most addressed application domains are homogeneously divided between commercial-off-the-shelf (COTS), distributed and embedded systems. Intensity of research showed a considerable increase in the last fourteen years. In addition to the analysis, this paper also synthesizes the available evidence, identifies open issues and points out areas that call for further research. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "5071eba5a173fdd496b41f2c8d24e028", "text": "We survey four variants of RSA designed to speed up RSA decryption and signing. We only consider variants that are backwards compatible in the sense that a system using one of these variants can interoperate with systems using standard RSA.", "title": "" }, { "docid": "d1a9ac5a11d1f9fbd9b9ee24a199cb70", "text": "In this paper, we propose a new robust twin support vector machine (called R-TWSVM) via second order cone programming formulations for classification, which can deal with data with measurement noise efficiently. Preliminary experiments confirm the robustness of the proposed method and its superiority to the traditional robust SVM in both computation time and classification accuracy. Remarkably, since the dual problems involve only inner products of the inputs, the kernel trick can be applied directly for nonlinear cases. At the same time, we do not need to compute any extra matrix inverses, which is totally different from existing TWSVMs. In addition, we also show that TWSVMs are a special case of our robust model and simultaneously give a new dual form of TWSVM by degenerating R-TWSVM, which successfully overcomes the existing shortcomings of TWSVM. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f282c9ff4afa773af39eb963f4987d09", "text": "The fast development of computing and communication has reformed the financial markets' dynamics. Nowadays many people invest and trade stocks through online channels and have efficient access to real-time market information. There are more opportunities to lose or make money with all the stock information available throughout the world; however, one must spend considerable effort and time to follow those stocks and the instant information available.
This paper presents a preliminary study of a multi-agent recommender system for computational investing. This system utilizes a hybrid filtering technique to adaptively recommend the most profitable stocks at the right time according to the investor's personal preferences. The hybrid technique includes collaborative and content-based filtering. The content-based model uses investor preferences, influential macro-economic factors, stock profiles and predicted trends to tailor its advice. The collaborative filter assesses the investing behaviours and actions of investor pairs who are proficient in the economic market to recommend similar ones to the target investor.", "title": "" }, { "docid": "92551f47dc9e17e4eeedaa94e98fd1dd", "text": "This 1.2 μm, 33 mW analog-to-digital converter (ADC) demonstrates a family of power reduction techniques including commutated feedback capacitor switching (CFCS), sharing of the second stage of an op amp between adjacent stages of a pipeline, reusing the first stage of an op amp as the comparator pre-amp, and exploiting parasitic capacitance as common-mode feedback capacitors.", "title": "" }, { "docid": "40e0d6e93c426107cbefbdf3d4ca85b9", "text": "H.264/MPEG-4 AVC is the latest international video coding standard. It was jointly developed by the Video Coding Experts Group (VCEG) of the ITU-T and the Moving Picture Experts Group (MPEG) of ISO/IEC. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, video conferencing, TV, storage (DVD and/or hard disk based, especially high-definition DVD), streaming video, digital video authoring, digital cinema, and many others. The work on a new set of extensions to this standard has recently been completed. These extensions, known as the Fidelity Range Extensions (FRExt), provide a number of enhanced capabilities relative to the base specification as approved in the Spring of 2003. In this paper, an overview of this standard is provided, including the highlights of the capabilities of the new FRExt features. Some comparisons with the existing MPEG-2 and MPEG-4 Part 2 standards are also provided.", "title": "" }, { "docid": "702d38b3ddfd2d0a2f506acbad561f63", "text": "Interactive theorem provers have been used extensively to reason about various software/hardware systems and mathematical theorems. The key challenge when using an interactive prover is that finding a suitable sequence of proof steps that will lead to a successful proof requires a significant amount of human intervention. This paper presents an automated technique that takes as input examples of successful proofs and infers an Extended Finite State Machine as output. This can in turn be used to generate proofs of new conjectures. Our preliminary experiments show that the inferred models are generally accurate (contain few false-positive sequences) and that representing existing proofs in such a way can be very useful when guiding new ones.", "title": "" }, { "docid": "ce2b354fee0d2d895d8af2c6642919fa", "text": "This paper presents a new hybrid dimensionality reduction method to seek a projection through optimization of both structural risk (supervised criterion) and data independence (unsupervised criterion). Classification accuracy is used as a metric to evaluate the performance of the method. By minimizing the structural risk, a projection originating from the decision boundaries directly improves the classification performance from a supervised perspective.
From an unsupervised perspective, a projection can also be obtained based on maximum independence among features (or attributes) in the data to indirectly achieve better classification accuracy over a more intrinsic representation of the data. Orthogonality interrelates the two sets of projections such that minimum redundancy exists between the projections, leading to more effective dimensionality reduction. Experimental results show that the proposed hybrid dimensionality reduction method that satisfies both criteria simultaneously provides higher classification performance, especially for noisy data sets, in a relatively lower dimensional space than various existing methods.", "title": "" }, { "docid": "4c64fb50bc70532d9a0ba4b6847525ed", "text": "An 18-GHz range frequency synthesizer is implemented in 0.13-μm SiGe BiCMOS technology as part of a 60-GHz superheterodyne transceiver chipset. It provides for RF channels of 56.5-64 GHz in 500-MHz steps, and features a phase-rotating multi-modulus divider capable of sub-integer division. Output frequency range from the synthesizer is 16.0 to 18.8 GHz, while the enabled RF frequency range is 3.5 times this, or 55.8 to 65.8 GHz. The measured RMS phase noise of the synthesizer is 0.8° (1 MHz to 1 GHz integration), while phase noise at 100-kHz and 10-MHz offsets is -90 and -124 dBc/Hz, respectively. Reference spurs are -69 dBc; sub-integer spurs are -65 dBc; and combined power consumption from 1.2 and 2.7 V is 144 mW.", "title": "" }, { "docid": "46db4cfa5ccb08da3ca884ad794dc419", "text": "Mutation testing of Python programs raises a problem of incompetent mutants. Incompetent mutants cause execution errors due to inconsistency of types that cannot be resolved before run-time. We present a practical approach in which incompetent mutants can be generated, but the solution is transparent to the user and incompetent mutants are detected by a mutation system during test execution. Experiments with 20 traditional and object-oriented operators confirmed that the overhead is acceptable. The paper presents an experimental evaluation of first- and higher-order mutation. Four algorithms for 2nd- and 3rd-order mutant generation were applied. The impact of code coverage consideration on the process efficiency is discussed. The experiments were supported by the MutPy system for mutation testing of Python programs.", "title": "" }, { "docid": "39490ce3446ac22bdc6042a3a38bc5ee", "text": "The ultimate goal of an information provider is to satisfy the user's information needs. That is, to provide the user with the right information, at the right time, through the right means. A prerequisite for developing personalised services is to rely on user profiles representing users' information needs. In this paper we will first address the issue of presenting a general user profile model. Then, the general user profile model will be customised for digital library users.", "title": "" }, { "docid": "fe4428aa7ae69111bb55d45c2941566e", "text": "In this paper, we determine the ordering quantity and reorder point for aircraft consumable spare parts. We use a continuous review model to propose a spare part inventory policy that can be used in an aircraft maintenance company in Indonesia. We employ the ABC classification system to categorize the spare parts based on their dollar contribution. We focus our research on managing the inventory level for spare parts in classes A and B, which are commonly known as the important classes.
The results of the research indicate that the continuous review policy gives a significant amount of savings compared to the existing policy used by the company.", "title": "" }, { "docid": "69d16861f969b2aaaa6658a754268786", "text": "In this paper, we introduce a bilinear composition loss function to address the problem of image dehazing. Previous methods in image dehazing use a two-stage approach which first estimates the transmission map and then estimates the clear image. The drawback of a two-stage method is that it tends to boost local image artifacts such as noise, aliasing and blocking. This is especially the case for heavy haze images captured with a low quality device. Our method is based on convolutional neural networks. Unique in our method is the bilinear composition loss function which directly models the correlations between the transmission map, clear image, and atmospheric light. This allows errors to be back-propagated to each sub-network concurrently, while maintaining the composition constraint to avoid overfitting of each sub-network. We evaluate the effectiveness of our proposed method using both synthetic and real world examples. Extensive experiments show that our method outperforms state-of-the-art methods, especially for haze images with severe noise levels and compression.", "title": "" }, { "docid": "bd700aba43a8a8de5615aa1b9ca595a7", "text": "Cloud computing has formed the conceptual and infrastructural basis for tomorrow's computing. The global computing infrastructure is rapidly moving towards cloud based architecture. While it is important to take advantage of cloud based computing by means of deploying it in diversified sectors, the security aspects in a cloud based computing environment remain at the core of interest. Cloud based services and service providers are being evolved which has resulted in a new business trend based on cloud technology. With the introduction of numerous cloud based services and geographically dispersed cloud service providers, sensitive information of different entities are normally stored in remote servers and locations with the possibilities of being exposed to unwanted parties in situations where the cloud servers storing those information are compromised. If security is not robust and consistent, the flexibility and advantages that cloud computing has to offer will have little credibility. This paper presents a review on the cloud computing concepts as well as security issues inherent within the context of cloud computing and cloud", "title": "" }, { "docid": "347ffb664378b56a5ae3a45d1251d7b7", "text": "We present Essentia 2.0, an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPL license. It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. The library is also wrapped in Python and includes a number of predefined executable extractors for the available music descriptors, which facilitates its use for fast prototyping and allows setting up research experiments very rapidly. Furthermore, it includes a Vamp plugin to be used with Sonic Visualiser for visualization purposes. The library is cross-platform and currently supports Linux, Mac OS X, and Windows systems.
Essentia is designed with a focus on the robustness of the provided music descriptors and is optimized in terms of the computational cost of the algorithms. The provided functionality, specifically the music descriptors included in-the-box and signal processing algorithms, is easily expandable and allows for both research experiments and development of large-scale industrial applications.", "title": "" }, { "docid": "a29a51df4eddfa0239903986f4011532", "text": "In recent years additive manufacturing, or three-dimensional (3D) printing, has become increasingly widespread and is also used in the medical and biomedical field [1]. 3D printing is a technology that allows solid objects of any shape to be printed, in plastic or other materials, from a digital model. The printing process takes place by overlapping layers of material corresponding to cross sections of the final product. The 3D models can be created de novo with 3D modeling software, or an existing object can be replicated with the use of a 3D scanner. In past years, the development of appropriate software packages made it possible to generate 3D printable anatomical models from computerized tomography, magnetic resonance imaging and ultrasound scans [2,3]. Up to now, 3D printed objects of nearly any size (from nanostructures to buildings) and material have been produced: plastics, metals, ceramics, graphene and even derivatives of human tissues. The so-called “bio-printers”, in fact, allow thin layers of cells immersed in a gelatinous matrix to be printed one above the other. Recent advances in 3D bioprinting have enabled researchers to print biocompatible scaffolds and human tissues such as skin, bone, cartilage and vessels, and are driving the design and 3D printing of artificial organs such as the liver and kidney [4]. Dentistry, prosthetics, craniofacial reconstructive surgery, neurosurgery and orthopedic surgery are among the disciplines that have already shown the versatility and possible applications of 3D printing in adults and children [2,5]. Only a few experiences have so far been reported in newborns and infants. 3D printed individualized bioresorbable airway splints have been used for the treatment of three infants with severe tracheobronchomalacia, ensuring resolution of pulmonary and extrapulmonary symptoms [6,7]. A 3D model of a complex congenital heart defect has been used for preoperative planning of intraoperative procedures, allowing surgeons to repair a complex defect in a single intervention [8]. As already shown for children with obstructive sleep apnea and craniofacial anomalies [9], personalized 3D printed masks could improve CPAP effectiveness and comfort in term and preterm neonates as well. Neonatal emergency transport services and rural hospitals could also benefit from this technology, making it possible to print medical device spare parts and surgical and medical instruments wherever they are not readily available. It is envisaged that 3D printing, in the near future, will contribute toward the individualization of neonatal care, although further multidisciplinary studies are still needed to evaluate safety and possible applications and to realize its full potential.", "title": "" }, { "docid": "378c3b785db68bd5efdf1ad026c901ea", "text": "Intrinsically switched tunable filters are switched on and off using the tuning elements that tune their center frequencies and/or bandwidths, without requiring an increase in the tuning range of the tuning elements.
Because external RF switches are not needed, substantial improvements in insertion loss, linearity, dc power consumption, control complexity, size, and weight are possible compared to conventional approaches. An intrinsically switched varactor-tuned bandstop filter and bandpass filter bank are demonstrated here for the first time. The intrinsically switched bandstop filter prototype has a second-order notch response with more than 50 dB of rejection continuously tunable from 665 to 1000 MHz (50%) with negligible passband ripple in the intrinsic off state. The intrinsically switched tunable bandpass filter bank prototype, comprised of three third-order bandpass filters, has a constant 50-MHz bandwidth response continuously tunable from 740 to 1644 MHz (122%) with less than 5 dB of passband insertion loss and more than 40 dB of isolation between bands.", "title": "" } ]
scidocsrr
839740a1ad696b4703f9eff52b5afefb
Design of Power and Area Efficient Approximate Multipliers
[ { "docid": "a10752bb80ad47e18ef7dbcd83d49ff7", "text": "Approximate computing has gained significant attention due to the popularity of multimedia applications. In this paper, we propose a novel inaccurate 4:2 counter that can effectively reduce the partial product stages of the Wallace Multiplier. Compared to the normal Wallace multiplier, our proposed multiplier can reduce 10.74% of power consumption and 9.8% of delay on average, with an error rate from 0.2% to 13.76% The accuracy of amplitude is higher than 99% In addition, we further enhance the design with error-correction units to provide accurate results. The experimental results show that the extra power consumption of correct units is lower than 6% on average. Compared to the normal Wallace multiplier, the average latency of our proposed multiplier with EDC is 6% faster when the bit-width is 32, and the power consumption is still 10% lower than that of the Wallace multiplier.", "title": "" }, { "docid": "962ab9e871dc06c3cd290787dc7e71aa", "text": "The conventional digital hardware computational blocks with different structures are designed to compute the precise results of the assigned calculations. The main contribution of our proposed Bio-inspired Imprecise Computational blocks (BICs) is that they are designed to provide an applicable estimation of the result instead of its precise value at a lower cost. These novel structures are more efficient in terms of area, speed, and power consumption with respect to their precise rivals. Complete descriptions of sample BIC adder and multiplier structures as well as their error behaviors and synthesis results are introduced in this paper. It is then shown that these BIC structures can be exploited to efficiently implement a three-layer face recognition neural network and the hardware defuzzification block of a fuzzy processor.", "title": "" } ]
[ { "docid": "1349c5daedd71bdfccaa0ea48b3fd54a", "text": "OBJECTIVE\nCraniosacral therapy (CST) is an alternative treatment approach, aiming to release restrictions around the spinal cord and brain and subsequently restore body function. A previously conducted systematic review did not obtain valid scientific evidence that CST was beneficial to patients. The aim of this review was to identify and critically evaluate the available literature regarding CST and to determine the clinical benefit of CST in the treatment of patients with a variety of clinical conditions.\n\n\nMETHODS\nComputerised literature searches were performed in Embase/Medline, Medline(®) In-Process, The Cochrane library, CINAHL, and AMED from database start to April 2011. Studies were identified according to pre-defined eligibility criteria. This included studies describing observational or randomised controlled trials (RCTs) in which CST as the only treatment method was used, and studies published in the English language. The methodological quality of the trials was assessed using the Downs and Black checklist.\n\n\nRESULTS\nOnly seven studies met the inclusion criteria, of which three studies were RCTs and four were of observational study design. Positive clinical outcomes were reported for pain reduction and improvement in general well-being of patients. Methodological Downs and Black quality scores ranged from 2 to 22 points out of a theoretical maximum of 27 points, with RCTs showing the highest overall scores.\n\n\nCONCLUSION\nThis review revealed the paucity of CST research in patients with different clinical pathologies. CST assessment is feasible in RCTs and has the potential of providing valuable outcomes to further support clinical decision making. However, due to the current moderate methodological quality of the included studies, further research is needed.", "title": "" }, { "docid": "6df12ee53551f4a3bd03bca4ca545bf1", "text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.", "title": "" }, { "docid": "cc8766fc94cf9865c9035c7b3d3ce4a6", "text": "Image features known as “gist descriptors” have recently been applied to the malware classification problem. In this research, we implement, test, and analyze a malware score based on gist descriptors, and verify that the resulting score yields very strong classification results. We also analyze the robustness of this gist-based scoring technique when applied to obfuscated malware, and we perform feature reduction to determine a minimal set of gist features. Then we compare the effectiveness of a deep learning technique to this gist-based approach. 
While scoring based on gist descriptors is effective, we show that our deep learning technique performs equally well. A potential advantage of the deep learning approach is that there is no need to extract the gist features when training or scoring.", "title": "" }, { "docid": "609c3a75308eb951079373feb88432ae", "text": "We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie, one from Wikipedia and the other from IMDb, written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize answers from the other version. This unique characteristic of DuoRC, where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating external background knowledge. Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-of-the-art neural RC models, which have achieved near-human performance on the SQuAD dataset (Rajpurkar et al., 2016b), even when coupled with traditional NLP techniques to address the challenges presented in DuoRC, exhibit very poor performance (F1 score of 37.42% on DuoRC v/s 86% on SQuAD dataset). This opens up several interesting research avenues wherein DuoRC could complement other RC datasets to explore novel neural approaches for studying language understanding.", "title": "" }, { "docid": "3ff01763def34800cf8afb9fc5fa9c83", "text": "The emerging machine learning technique called support vector machines is proposed as a method for performing nonlinear equalization in communication systems. The support vector machine has the advantage that a smaller number of parameters for the model can be identified in a manner that does not require the extent of prior information or heuristic assumptions that some previous techniques require. Furthermore, the optimization method of a support vector machine is quadratic programming, which is a well-studied and understood mathematical programming technique. Support vector machine simulations are carried out on nonlinear problems previously studied by other researchers using neural networks. This allows initial comparison against other techniques to determine the feasibility of using the proposed method for nonlinear detection. Results show that support vector machines perform as well as neural networks on the nonlinear problems investigated. A method is then proposed to introduce decision feedback processing to support vector machines to address the fact that intersymbol interference (ISI) data generates input vectors having temporal correlation, whereas a standard support vector machine assumes independent input vectors.
Presenting the problem from the viewpoint of the pattern space illustrates the utility of a bank of support vector machines. This approach yields a nonlinear processing method that is somewhat different than the nonlinear decision feedback method whereby the linear feedback filter of the decision feedback equalizer is replaced by a Volterra filter. A simulation using a linear system shows that the proposed method performs equally to a conventional decision feedback equalizer for this problem.", "title": "" }, { "docid": "ba5b796721787105e48ad2794cfc11cc", "text": "Real world applications of machine learning in natural language processing can span many different domains and usually require a huge effort for the annotation of domain specific training data. For this reason, domain adaptation techniques have gained a lot of attention in the last years. In order to derive an effective domain adaptation, a good feature representation across domains is crucial as well as the generalisation ability of the predictive model. In this paper we address the problem of domain adaptation for sentiment classification by combining deep learning, for acquiring a cross-domain high-level feature representation, and ensemble methods, for reducing the cross-domain generalization error. The proposed adaptation framework has been evaluated on a benchmark dataset composed of reviews of four different Amazon category of products, significantly outperforming the state of the art methods.", "title": "" }, { "docid": "b51d531c2ff106124f96a4287e466b90", "text": "Detecting buildings from very high resolution (VHR) aerial and satellite images is extremely useful in map making, urban planning, and land use analysis. Although it is possible to manually locate buildings from these VHR images, this operation may not be robust and fast. Therefore, automated systems to detect buildings from VHR aerial and satellite images are needed. Unfortunately, such systems must cope with major problems. First, buildings have diverse characteristics, and their appearance (illumination, viewing angle, etc.) is uncontrolled in these images. Second, buildings in urban areas are generally dense and complex. It is hard to detect separate buildings from them. To overcome these difficulties, we propose a novel building detection method using local feature vectors and a probabilistic framework. We first introduce four different local feature vector extraction methods. Extracted local feature vectors serve as observations of the probability density function (pdf) to be estimated. Using a variable-kernel density estimation method, we estimate the corresponding pdf. In other words, we represent building locations (to be detected) in the image as joint random variables and estimate their pdf. Using the modes of the estimated density, as well as other probabilistic properties, we detect building locations in the image. We also introduce data and decision fusion methods based on our probabilistic framework to detect building locations. We pick certain crops of VHR panchromatic aerial and Ikonos satellite images to test our method. We assume that these crops are detected using our previous urban region detection method. Our test images are acquired by two different sensors, and they have different spatial resolutions. Also, buildings in these images have diverse characteristics. Therefore, we can test our methods on a diverse data set. 
Extensive tests indicate that our method can be used to automatically detect buildings in a robust and fast manner in Ikonos satellite and our aerial images.", "title": "" }, { "docid": "2399e1ffd634417f00273993ad0ba466", "text": "Requirements prioritization aims at identifying the most important requirements for a software system, a crucial step when planning for system releases and deciding which requirements to implement in each release. Several prioritization methods and supporting tools have been proposed so far. How to evaluate their properties, with the aim of supporting the selection of the most appropriate method for a specific project, is considered a relevant question. In this paper, we present an empirical study aiming at evaluating two state-of-the-art tool-supported requirements prioritization methods, AHP and CBRank. We focus on three measures: ease of use, time consumption and accuracy. The experiment has been conducted with 23 experienced subjects on a set of 20 requirements from a real project. Results indicate that for the first two characteristics CBRank outperforms AHP, while for accuracy AHP performs better than CBRank, even if the resulting ranks from the two methods are very similar. The majority of the users found CBRank the “overall best”.", "title": "" }, { "docid": "3fd747a983ef1a0e5eff117b8765d4b3", "text": "We study centrality in urban street patterns of different world cities represented as networks in geographical space. The results indicate that a spatial analysis based on a set of four centrality indices allows an extended visualization and characterization of the city structure. A hierarchical clustering analysis based on the distributions of centrality has a certain capacity to distinguish different classes of cities. In particular, self-organized cities exhibit scale-free properties similar to those found in nonspatial networks, while planned cities do not.", "title": "" }, { "docid": "45ea8e1e27f6c687d957af561aca5188", "text": "Impedance matching networks for nonlinear devices such as amplifiers and rectifiers are normally very challenging to design, particularly for broadband and multiband devices. A novel design concept for a broadband high-efficiency rectenna without using matching networks is presented in this paper for the first time. An off-center-fed dipole antenna with relatively high input impedance over a wide frequency band is proposed. The antenna impedance can be tuned to the desired value and directly provides a complex conjugate match to the impedance of a rectifier. The RF power received by the antenna can be delivered to the rectifier efficiently without using impedance matching networks; thus, the proposed rectenna is of a simple structure, low cost, and compact size. In addition, the rectenna can work well under different operating conditions and using different types of rectifying diodes. A rectenna has been designed and made based on this concept. The measured results show that the rectenna is of high power conversion efficiency (more than 60%) in two wide bands, which are 0.9–1.1 and 1.8–2.5 GHz, for mobile, Wi-Fi, and ISM bands. Moreover, by using different diodes, the rectenna can maintain its wide bandwidth and high efficiency over a wide range of input power levels (from 0 to 23 dBm) and load values (from 200 to 2000 Ω). It is, therefore, suitable for high-efficiency wireless power transfer or energy harvesting applications.
The proposed rectenna is general and simple in structure, without the need for a matching network, and is hence of great significance for many applications.", "title": "" }, { "docid": "24e73ff615bb27e3f8f16746f496b689", "text": "A physically-based computational technique was investigated which is intended to estimate an initial guess for complex values of the wavenumber of a disturbance leading to the solution of the fourth-order Orr–Sommerfeld (O–S) equation. The complex wavenumbers, or eigenvalues, were associated with the stability characteristics of a semi-infinite shear flow represented by a hyperbolic-tangent function. This study was devoted to the examination of unstable flow assuming a spatially growing disturbance and is predicated on the fact that flow instability is correlated with elevated levels of perturbation kinetic energy per unit mass. A MATLAB computer program was developed such that the computational domain was selected to be in quadrant IV, where the real part of the wavenumber is positive and the imaginary part is negative to establish the conditions for unstable flow. For a given Reynolds number and disturbance wave speed, the perturbation kinetic energy per unit mass was computed at various node points in the selected subdomain of the complex plane. The initial guess for the complex wavenumber to start the solution process was assumed to be associated with the highest calculated perturbation kinetic energy per unit mass. Once the initial guess had been approximated, it was used to obtain the solution to the O–S equation by performing a Runge–Kutta integration scheme that computationally marched from the far-field region in the shear layer down to the lower solid boundary. Results compared favorably with the stability characteristics obtained from an earlier study for semi-infinite Blasius flow over a flat boundary. © 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c6725a67f1fa2b091e0bbf980e6260be", "text": "This paper examines job satisfaction and employees’ turnover intentions in Total Nigeria PLC in Lagos State. The paper highlights and defines basic concepts of job satisfaction and employees’ turnover intention. It specifically considered satisfaction with pay, nature of work and supervision as the three facets of job satisfaction that affect employee turnover intention. To achieve this objective, the authors adopted a survey method, administering questionnaires, conducting interviews and reviewing archival documents, as well as reviewing relevant journals and textbooks in this field of learning, as the means of data collection. Four (4) major hypotheses were derived from the literature and the respective null hypotheses were tested at the .05 level of significance. It was found that job satisfaction specifically reduces employees’ turnover intention and that Total Nigeria PLC adopts a standard pay structure, a conducive nature of work and efficient supervision not only as strategies to reduce employees’ turnover but also as the company’s retention strategy.", "title": "" }, { "docid": "5350ffea7a4187f0df11fd71562aba43", "text": "The presence of buried landmines is a serious threat in many areas around the world. Although various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy-to-use systems providing accurate performance are still under research. 
Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, although system training can be performed simply by relying on synthetically generated data. Results show that it is possible to reach 95% detection accuracy without training on real acquisitions of landmine profiles.", "title": "" }, { "docid": "bb5c4d59f598427ea1e2946ae74a7cc8", "text": "In a nutshell: This course comprehensively covers important user experience (UX) evaluation methods as well as opportunities and challenges of UX evaluation in the area of entertainment and games. The course is an ideal forum for attendees to gain insight into state-of-the-art user experience evaluation methods going well beyond standard usability and user experience evaluation approaches in the area of human-computer interaction. It surveys and assesses the efforts of user experience evaluation of the gaming and human-computer interaction communities during the last 15 years.", "title": "" }, { "docid": "69e4bb63a9041b3c95fba1a903bc0e5c", "text": "Compressed sensing is a novel research area, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that high-dimensional signals, which allow a sparse representation by a suitable basis or, more generally, a frame, can be recovered from what was previously considered highly incomplete linear measurements by using efficient algorithms. This article shall serve as an introduction to and a survey about compressed sensing.", "title": "" }, { "docid": "27b3cd45e0bdb279a5aa5f1f082ea850", "text": "Tensors (also called multiway arrays) are a generalization of vectors and matrices to higher dimensions based on multilinear algebra. The development of theory and algorithms for tensor decompositions (factorizations) has been an active area of study within the past decade, e.g., [1] and [2]. These methods have been successfully applied to many problems in unsupervised learning and exploratory data analysis. Multiway analysis enables one to effectively capture the multilinear structure of the data, which is usually available as a priori information about the data. Hence, it might provide advantages over matrix factorizations by enabling one to more effectively use the underlying structure of the data. Besides unsupervised tensor decompositions, supervised tensor subspace regression and classification formulations have also been successfully applied to a variety of fields including chemometrics, signal processing, computer vision, and neuroscience.", "title": "" }, { "docid": "5301c9ab75519143c5657b9fa780cfcb", "text": "Although discriminatively trained classifiers are usually more accurate when labeled training data is abundant, previous work has shown that when training data is limited, generative classifiers can outperform them. This paper describes a hybrid model in which a high-dimensional subset of the parameters are trained to maximize generative likelihood, and another, small, subset of parameters are discriminatively trained to maximize conditional likelihood. 
We give a sample complexity bound showing that in order to fit the discriminative parameters well, the number of training examples required depends only on the logarithm of the number of feature occurrences and feature set size. Experimental results show that hybrid models can provide lower test error and can produce better accuracy/coverage curves than either their purely generative or purely discriminative counterparts. We also discuss several advantages of hybrid models, and advocate further work in this area.", "title": "" }, { "docid": "b876e62db8a45ab17d3a9d217e223eb7", "text": "A study was conducted to evaluate user performance and satisfaction in completion of a set of text creation tasks using three commercially available continuous speech recognition systems. The study also compared user performance on similar tasks using keyboard input. One part of the study (Initial Use) involved 24 users who enrolled, received training and carried out practice tasks, and then completed a set of transcription and composition tasks in a single session. In a parallel effort (Extended Use), four researchers used speech recognition to carry out real work tasks over 10 sessions with each of the three speech recognition software products. This paper presents results from the Initial Use phase of the study along with some preliminary results from the Extended Use phase. We present details of the kinds of usability and system design problems likely in current systems and several common patterns of error correction that we found.", "title": "" }, { "docid": "3aaffdda034c762ad36954386d796fb9", "text": "KNTU CDRPM is a cable-driven redundant parallel manipulator, which is under investigation for possible high-speed and large-workspace applications. This newly developed mechanism has several advantages compared to conventional parallel mechanisms. Its rotational motion range is relatively large, its redundancy improves safety in case of cable failure, and its design is suitable for long-duration, high-acceleration motions. In this paper, the collision-free workspace of the manipulator is derived by applying a fast geometrical intersection detection method, which can be used for any fully parallel manipulator. Implementation of the algorithm on the Neuron design of the KNTU CDRPM leads to significant results, which introduce a new style of design for spatial cable-driven parallel manipulators. The results are elaborated in three presentations: constant-orientation workspace, total orientation workspace and orientation workspace.", "title": "" }, { "docid": "503756888df43d745e4fb5051f8855fb", "text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations; for instance, a single email leak can potentially cause expensive lawsuits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. 
More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. Not only does this paper introduce the important problem of email leak detection, but it also presents an effective solution that can be easily implemented in any email client, with no changes on the email server side.", "title": "" } ]
scidocsrr
58290864dd532a48b4558668cb8b6eda
Gremlin: Systematic Resilience Testing of Microservices
[ { "docid": "35260e253551bcfd21ce6d08c707f092", "text": "Current debugging and optimization methods scale poorly to deal with the complexity of modern Internet services, in which a single request triggers parallel execution of numerous heterogeneous software components over a distributed set of computers. The Achilles’ heel of current methods is the need for a complete and accurate model of the system under observation: producing such a model is challenging because it requires either assimilating the collective knowledge of hundreds of programmers responsible for the individual components or restricting the ways in which components interact. Fortunately, the scale of modern Internet services offers a compensating benefit: the sheer volume of requests serviced means that, even at low sampling rates, one can gather a tremendous amount of empirical performance observations and apply “big data” techniques to analyze those observations. In this paper, we show how one can automatically construct a model of request execution from pre-existing component logs by generating a large number of potential hypotheses about program behavior and rejecting hypotheses contradicted by the empirical observations. We also show how one can validate potential performance improvements without costly implementation effort by leveraging the variation in component behavior that arises naturally over large numbers of requests to measure the impact of optimizing individual components or changing scheduling behavior. We validate our methodology by analyzing performance traces of over 1.3 million requests to Facebook servers. We present a detailed study of the factors that affect the end-to-end latency of such requests. We also use our methodology to suggest and validate a scheduling optimization for improving Facebook request latency.", "title": "" } ]
[ { "docid": "91c5ad5a327026a424454779f96da601", "text": "We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.", "title": "" }, { "docid": "bdf8d4a8862aad3631f5def11b13b101", "text": "We examine the relationship between children's kindergarten attention skills and developmental patterns of classroom engagement throughout elementary school in disadvantaged urban neighbourhoods. Kindergarten measures include teacher ratings of classroom behavior, direct assessments of number knowledge and receptive vocabulary, and parent-reported family characteristics. From grades 1 through 6, teachers also rated children's classroom engagement. Semi-parametric mixture modeling generated three distinct trajectories of classroom engagement (n = 1369, 50% boys). Higher levels of kindergarten attention were proportionately associated with greater chances of belonging to better classroom engagement trajectories compared to the lowest classroom engagement trajectory. In fact, improvements in kindergarten attention reliably increased the likelihood of belonging to more productive classroom engagement trajectories throughout elementary school, above and beyond confounding child and family factors. Measuring the development of classroom productivity is pertinent because such dispositions represent precursors to mental health, task-orientation, and persistence in high school and workplace behavior in adulthood.", "title": "" }, { "docid": "846ae985f61a0dcdb1ff3a2226c1b41a", "text": "OBJECTIVE\nThis article provides an overview of tactile displays. Its goal is to assist human factors practitioners in deciding when and how to employ the sense of touch for the purpose of information representation. The article also identifies important research needs in this area.\n\n\nBACKGROUND\nFirst attempts to utilize the sense of touch as a medium for communication date back to the late 1950s. For the next 35 years progress in this area was relatively slow, but recent years have seen a surge in the interest and development of tactile displays and the integration of tactile signals in multimodal interfaces. A thorough understanding of the properties of this sensory channel and its interaction with other modalities is needed to ensure the effective and robust use of tactile displays.\n\n\nMETHODS\nFirst, an overview of vibrotactile perception is provided. Next, the design of tactile displays is discussed with respect to available technologies. The potential benefit of including tactile cues in multimodal interfaces is discussed. Finally, research needs in the area of tactile information presentation are highlighted.\n\n\nRESULTS\nThis review provides human factors researchers and interface designers with the requisite knowledge for creating effective tactile interfaces. 
It describes both potential benefits and limitations of this approach to information presentation.\n\n\nCONCLUSION\nThe sense of touch represents a promising means of supporting communication and coordination in human-human and human-machine systems.\n\n\nAPPLICATION\nTactile interfaces can support numerous functions, including spatial orientation and guidance, attention management, and sensory substitution, in a wide range of domains.", "title": "" }, { "docid": "dff035a6e773301bd13cd0b71d874861", "text": "Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data. A number of approaches have been proposed to extract representative features from 3D skeletal data, most commonly hard wired geometric or bio-inspired shape context features. We propose a hierarchial dynamic framework that first extracts high level skeletal joints features and then uses the learned representation for estimating emission probability to infer action sequences. Currently gaussian mixture models are the dominant technique for modeling the emission distribution of hidden Markov models. We show that better action recognition using skeletal features can be achieved by replacing gaussian mixture models by deep neural networks that contain many layers of features to predict probability distributions over states of hidden Markov models. The framework can be easily extended to include a ergodic state to segment and recognize actions simultaneously.", "title": "" }, { "docid": "fe97095f2af18806e7032176c6ac5d89", "text": "Targeted social engineering attacks in the form of spear phishing emails, are often the main gimmick used by attackers to infiltrate organizational networks and implant state-of-the-art Advanced Persistent Threats (APTs). Spear phishing is a complex targeted attack in which, an attacker harvests information about the victim prior to the attack. This information is then used to create sophisticated, genuine-looking attack vectors, drawing the victim to compromise confidential information. What makes spear phishing different, and more powerful than normal phishing, is this contextual information about the victim. Online social media services can be one such source for gathering vital information about an individual. In this paper, we characterize and examine a true positive dataset of spear phishing, spam, and normal phishing emails from Symantec's enterprise email scanning service. We then present a model to detect spear phishing emails sent to employees of 14 international organizations, by using social features extracted from LinkedIn. Our dataset consists of 4,742 targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack emails sent to 5,912 non victims; and publicly available information from their LinkedIn profiles. We applied various machine learning algorithms to this labeled data, and achieved an overall maximum accuracy of 97.76% in identifying spear phishing emails. We used a combination of social features from LinkedIn profiles, and stylometric features extracted from email subjects, bodies, and attachments. However, we achieved a slightly better accuracy of 98.28% without the social features. Our analysis revealed that social features extracted from LinkedIn do not help in identifying spear phishing emails. 
To the best of our knowledge, this is one of the first attempts to make use of a combination of stylometric features extracted from emails, and social features extracted from an online social network to detect targeted spear phishing emails.", "title": "" }, { "docid": "30aeb5f14438b03f7cdaee9783273d97", "text": "The status of English grammar teaching in English teaching has weakened and even once disappeared in part English class; until the late 1980s, foreign English teachers had a consistent view of the importance of grammar teaching. In recent years, more and more domestic scholars begin to think about the situation of China and explore the grammar teaching method. This article will review the explicit grammar instruction and implicit grammar teaching research, collect and analyze the integration of explicit grammar instruction and implicit grammar teaching strategy and its advantages in the grammar teaching.", "title": "" }, { "docid": "9bbd6a417b373fb19f691d1edc728a6c", "text": "The increasing advances in hardware technology for sensor processing and mobile technology has resulted in greater access and availability of sensor data from a wide variety of applications. For example, the commodity mobile devices contain a wide variety of sensors such as GPS, accelerometers, and other kinds of data. Many other kinds of technology such as RFID-enabled sensors also produce large volumes of data over time. This has lead to a need for principled methods for efficient sensor data processing. This chapter will provide an overview of the challenges of sensor data analytics and the different areas of research in this context. We will also present the organization of the chapters in this book in this context.", "title": "" }, { "docid": "fb1f3f300bcd48d99f0a553a709fdc89", "text": "This work includes a high step up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, Electronic lighting systems and Electrical vehicles. The whole system consists of a Photovoltaic panel (PV), High step up DC-DC converter with Maximum Power Point Tracking (MPPT) and DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). D-sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step up DC-DC converter which comprises of both coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increases overall system efficiency. MATLAB/simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.", "title": "" }, { "docid": "9b0ddf08b06c625ea579d9cee6c8884b", "text": "A frequency-reconfigurable bow-tie antenna for Bluetooth, WiMAX, and WLAN applications is proposed. The bow-tie radiator is printed on two sides of the substrate and is fed by a microstripline continued by a pair of parallel strips. By embedding p-i-n diodes over the bow-tie arms, the effective electrical length of the antenna can be changed, leading to an electrically tunable operating band. The simple biasing circuit used in this design eliminates the need for extra bias lines, and thus avoids distortion of the radiation patterns. 
Measured results are in good agreement with simulations, showing that the proposed antenna can be tuned to operate in the 2.2-2.53, 2.97-3.71, or 4.51-6 GHz band with similar radiation patterns.", "title": "" }, { "docid": "70789bc929ef7d36f9bb4a02793f38f5", "text": "Lock managers are among the most studied components in concurrency control and transactional systems. However, one question seems to have been generally overlooked: “When there are multiple lock requests on the same object, which one(s) should be granted first?” Nearly all existing systems rely on a FIFO (first in, first out) strategy to decide which transaction(s) to grant the lock to. In this paper, however, we show that lock scheduling choices have significant ramifications on the overall performance of a transactional system. Despite the large body of research on job scheduling outside the database context, lock scheduling presents subtle but challenging requirements that render existing results on scheduling inapt for a transactional database. By carefully studying this problem, we present the concept of contention-aware scheduling, show the hardness of the problem, and propose novel lock scheduling algorithms (LDSF and bLDSF), which guarantee a constant factor approximation of the best scheduling. We conduct extensive experiments using a popular database on both TPC-C and a microbenchmark. Compared to FIFO, the default scheduler in most database systems, our bLDSF algorithm yields up to 300x speedup in overall transaction latency. Alternatively, our LDSF algorithm, which is simpler and achieves comparable performance to bLDSF, has already been adopted by the open-source community, and was chosen as the default scheduling strategy in MySQL 8.0.3+. PVLDB Reference Format: Boyu Tian, Jiamin Huang, Barzan Mozafari, Grant Schoenebeck. Contention-Aware Lock Scheduling for Transactional Databases. PVLDB, 11 (5): xxxx-yyyy, 2018. DOI: 10.1145/3177732.3177740", "title": "" }, { "docid": "9a05c95de1484df50a5540b31df1a010", "text": "Abstract. This work deals with a remote monitoring system, based on a smart display, for temperature and current sensors using a hybrid CAN-Zigbee network. The CAN bus is used as the short-distance data transmission medium, while Zigbee is employed so that each node of the network can interact wirelessly with the main node. In this way, the hybrid network combines the advantages of each communication protocol to exchange data. The system has four nodes: two are CAN nodes, which receive the information from the sensors, and the rest are Zigbee nodes. These nodes are in charge of transmitting the information from a CAN node wirelessly and displaying it on a smart display.", "title": "" }, { "docid": "6914ba1e0a6a60a9d8956f9b9429ab45", "text": "Quantum cognition research applies abstract, mathematical principles of quantum theory to inquiries in cognitive science. It differs fundamentally from alternative speculations about quantum brain processes. This topic presents new developments within this research program. In the introduction to this topic, we try to answer three questions: Why apply quantum concepts to human cognition? How is quantum cognitive modeling different from traditional cognitive modeling? What cognitive processes have been modeled using a quantum account? 
In addition, a brief introduction to quantum probability theory and a concrete example is provided to illustrate how a quantum cognitive model can be developed to explain paradoxical empirical findings in psychological literature.", "title": "" }, { "docid": "d97518a615c4f963d86e36c9dd30b643", "text": "In this paper, the Polyjet technology was applied to build high-Q X-band resonators and low loss filters for the first time. As one of state-of-the-art 3-D printing technologies, the Polyjet technique produces RF models with finest resolution and outstanding surface finish in a clean, fast and affordable way. The measured resonator with 0.3% frequency shift yielded a quality factor of 214 at 10.26 GHz. A Vertically stacked two-cavity bandpass filter with an insertion loss of 2.1 dB and 5.1% bandwidth (BW) was realized successfully. The dimensional tolerance of this process was found to be less than 0.5%. The well matched performance of the resonator and the filter, as well as the fine feature size indicate that the Polyjet process is suitable for the implementation of low loss and low cost RF devices.", "title": "" }, { "docid": "568fa874b944120be9bdb71bec2f5cec", "text": "Using a developmental systems perspective, this review focuses on how genetic predispositions interact with aspects of the eating environment to produce phenotypic food preferences. Predispositions include the unlearned, reflexive reactions to basic tastes: the preference for sweet and salty tastes, and the rejection of sour and bitter tastes. Other predispositions are (a) the neophobic reaction to new foods and (b) the ability to learn food preferences based on associations with the contexts and consequences of eating various foods. Whether genetic predispositions are manifested in food preferences that foster healthy diets depends on the eating environment, including food availability and child-feeding practices of the adults. Unfortunately, in the United States today, the ready availability of energy-dense foods, high in sugar, fat, and salt, provides an eating environment that fosters food preferences inconsistent with dietary guidelines, which can promote excess weight gain and obesity.", "title": "" }, { "docid": "7a2c19e94d07afbfe81c7875aed1ff23", "text": "We combine linear discriminant analysis (LDA) and K-means clustering into a coherent framework to adaptively select the most discriminative subspace. We use K-means clustering to generate class labels and use LDA to do subspace selection. The clustering process is thus integrated with the subspace selection process and the data are then simultaneously clustered while the feature subspaces are selected. We show the rich structure of the general LDA-Km framework by examining its variants and their relationships to earlier approaches. Relations among PCA, LDA, K-means are clarified. Extensive experimental results on real-world datasets show the effectiveness of our approach.", "title": "" }, { "docid": "67d704317471c71842a1dfe74ddd324a", "text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. 
The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.", "title": "" }, { "docid": "f1cfe1cb5ddf46076dae6cd0f69d137f", "text": "SiC-SIT power semiconductor switching devices has an advantage that its switching time is high speed compared to those of other power semiconductor switching devices. We adopt newly developed SiC-SITs which have the maximum ratings 800V/4A and prepare a breadboard of a conventional single-ended push-pull(SEPP) high frequency inverter. This paper describes the characteristics of SiC-SIT on the basis of the experimental results of the breadboard. Its operational frequencies are varied at from 100 kHz to 250kHz with PWM control technique for output power regulation. Its load is induction fluid heating systems for super-heated-steam production.", "title": "" }, { "docid": "31ed2186bcd711ac4a5675275cd458eb", "text": "Location-aware wireless sensor networks will enable a new class of applications, and accurate range estimation is critical for this task. Low-cost location determination capability is studied almost entirely using radio frequency received signal strength (RSS) measurements, resulting in poor accuracy. More accurate systems use wide bandwidths and/or complex time-synchronized infrastructure. Low-cost, accurate ranging has proven difficult because small timing errors result in large range errors. This paper addresses estimation of the distance between wireless nodes using a two-way ranging technique that approaches the Cramér-Rao Bound on ranging accuracy in white noise and achieves 1-3 m accuracy in real-world ranging and localization experiments. This work provides an alternative to inaccurate RSS and complex, wide-bandwidth methods. Measured results using a prototype wireless system confirm performance in the real world.", "title": "" }, { "docid": "414bb4a869a900066806fa75edc38bd6", "text": "For nearly a century, scholars have sought to understand, measure, and explain giftedness. Succeeding theories and empirical investigations have often built on earlier work, complementing or sometimes clashing over conceptions of talent or contesting the mechanisms of talent development. Some have even suggested that giftedness itself is a misnomer, mistaken for the results of endless practice or social advantage. In surveying the landscape of current knowledge about giftedness and gifted education, this monograph will advance a set of interrelated arguments: The abilities of individuals do matter, particularly their abilities in specific talent domains; different talent domains have different developmental trajectories that vary as to when they start, peak, and end; and opportunities provided by society are crucial at every point in the talent-development process. We argue that society must strive to promote these opportunities but that individuals with talent also have some responsibility for their own growth and development. Furthermore, the research knowledge base indicates that psychosocial variables are determining influences in the successful development of talent. 
Finally, outstanding achievement or eminence ought to be the chief goal of gifted education. We assert that aspiring to fulfill one's talents and abilities in the form of transcendent creative contributions will lead to high levels of personal satisfaction and self-actualization as well as produce yet unimaginable scientific, aesthetic, and practical benefits to society. To frame our discussion, we propose a definition of giftedness that we intend to be comprehensive. Giftedness is the manifestation of performance that is clearly at the upper end of the distribution in a talent domain even relative to other high-functioning individuals in that domain. Further, giftedness can be viewed as developmental in that in the beginning stages, potential is the key variable; in later stages, achievement is the measure of giftedness; and in fully developed talents, eminence is the basis on which this label is granted. Psychosocial variables play an essential role in the manifestation of giftedness at every developmental stage. Both cognitive and psychosocial variables are malleable and need to be deliberately cultivated. Our goal here is to provide a definition that is useful across all domains of endeavor and acknowledges several perspectives about giftedness on which there is a fairly broad scientific consensus. Giftedness (a) reflects the values of society; (b) is typically manifested in actual outcomes, especially in adulthood; (c) is specific to domains of endeavor; (d) is the result of the coalescing of biological, pedagogical, psychological, and psychosocial factors; and (e) is relative not just to the ordinary (e.g., a child with exceptional art ability compared to peers) but to the extraordinary (e.g., an artist who revolutionizes a field of art). In this monograph, our goal is to review and summarize what we have learned about giftedness from the literature in psychological science and suggest some directions for the field of gifted education. We begin with a discussion of how giftedness is defined (see above). In the second section, we review the reasons why giftedness is often excluded from major conversations on educational policy, and then offer rebuttals to these arguments. In spite of concerns for the future of innovation in the United States, the education research and policy communities have been generally resistant to addressing academic giftedness in research, policy, and practice. The resistance is derived from the assumption that academically gifted children will be successful no matter what educational environment they are placed in, and because their families are believed to be more highly educated and hold above-average access to human capital wealth. These arguments run counter to psychological science indicating the need for all students to be challenged in their schoolwork and that effort and appropriate educational programing, training and support are required to develop a student's talents and abilities. In fact, high-ability students in the United States are not faring well on international comparisons. The scores of advanced students in the United States with at least one college-educated parent were lower than the scores of students in 16 other developed countries regardless of parental education level. In the third section, we summarize areas of consensus and controversy in gifted education, using the extant psychological literature to evaluate these positions. Psychological science points to several variables associated with outstanding achievement. 
The most important of these include general and domain-specific ability, creativity, motivation and mindset, task commitment, passion, interest, opportunity, and chance. Consensus has not been achieved in the field however in four main areas: What are the most important factors that contribute to the acuities or propensities that can serve as signs of potential talent? What are potential barriers to acquiring the \"gifted\" label? What are the expected outcomes of gifted education? And how should gifted students be educated? In the fourth section, we provide an overview of the major models of giftedness from the giftedness literature. Four models have served as the foundation for programs used in schools in the United States and in other countries. Most of the research associated with these models focuses on the precollegiate and early university years. Other talent-development models described are designed to explain the evolution of talent over time, going beyond the school years into adult eminence (but these have been applied only by out-of-school programs as the basis for educating gifted students). In the fifth section we present methodological challenges to conducting research on gifted populations, including definitions of giftedness and talent that are not standardized, test ceilings that are too low to measure progress or growth, comparison groups that are hard to find for extraordinary individuals, and insufficient training in the use of statistical methods that can address some of these challenges. In the sixth section, we propose a comprehensive model of trajectories of gifted performance from novice to eminence using examples from several domains. This model takes into account when a domain can first be expressed meaningfully-whether in childhood, adolescence, or adulthood. It also takes into account what we currently know about the acuities or propensities that can serve as signs of potential talent. Budding talents are usually recognized, developed, and supported by parents, teachers, and mentors. Those individuals may or may not offer guidance for the talented individual in the psychological strengths and social skills needed to move from one stage of development to the next. We developed the model with the following principles in mind: Abilities matter, domains of talent have varying developmental trajectories, opportunities need to be provided to young people and taken by them as well, psychosocial variables are determining factors in the successful development of talent, and eminence is the aspired outcome of gifted education. In the seventh section, we outline a research agenda for the field. This agenda, presented in the form of research questions, focuses on two central variables associated with the development of talent-opportunity and motivation-and is organized according to the degree to which access to talent development is high or low and whether an individual is highly motivated or not. Finally, in the eighth section, we summarize implications for the field in undertaking our proposed perspectives. 
These include a shift toward identification of talent within domains, the creation of identification processes based on the developmental trajectories of talent domains, the provision of opportunities along with monitoring for response and commitment on the part of participants, provision of coaching in psychosocial skills, and organization of programs around the tools needed to reach the highest possible levels of creative performance or productivity.", "title": "" }, { "docid": "9736331d674470adbe534503ef452cca", "text": "In this paper we present our system for human-in-theloop video object segmentation. The backbone of our system is a method for one-shot video object segmentation [3]. While fast, this method requires an accurate pixel-level segmentation of one (or several) frames as input. As manually annotating such a segmentation is impractical, we propose a deep interactive image segmentation method, that can accurately segment objects with only a handful of clicks. On the GrabCut dataset, our method obtains 90% IOU with just 3.8 clicks on average, setting the new state of the art. Furthermore, as our method iteratively refines an initial segmentation, it can effectively correct frames where the video object segmentation fails, thus allowing users to quickly obtain high quality results even on challenging sequences. Finally, we investigate usage patterns and give insights in how many steps users take to annotate frames, what kind of corrections they provide, etc., thus giving important insights for further improving interactive video segmentation.", "title": "" } ]
scidocsrr
bf4a38ce39c068b3f8160ade9b970d54
Security Analysis of Cloud Computing
[ { "docid": "ae369da37b2ff231082df12f15b26cb5", "text": "Although the cloud computing model is considered to be a very promising internet-based computing platform, it results in a loss of security control over the cloud-hosted assets. This is due to the outsourcing of enterprise IT assets hosted on third-party cloud computing platforms. Moreover, the lack of security constraints in the Service Level Agreements between the cloud providers and consumers results in a loss of trust as well. Obtaining a security certificate such as ISO 27000 or NIST-FISMA would help cloud providers improve consumers trust in their cloud platforms' security. However, such standards are still far from covering the full complexity of the cloud computing model. We introduce a new cloud security management framework based on aligning the FISMA standard to fit with the cloud computing model, enabling cloud providers and consumers to be security certified. Our framework is based on improving collaboration between cloud providers, service providers and service consumers in managing the security of the cloud platform and the hosted services. It is built on top of a number of security standards that assist in automating the security management process. We have developed a proof of concept of our framework using. NET and deployed it on a test bed cloud platform. We evaluated the framework by managing the security of a multi-tenant SaaS application exemplar.", "title": "" } ]
[ { "docid": "e8d48a28c208a0ff5c4e17dd205f8bd9", "text": "Red and blue light are both vital factors for plant growth and development. We examined how different ratios of red light to blue light (R/B) provided by light-emitting diodes affected photosynthetic performance by investigating parameters related to photosynthesis, including leaf morphology, photosynthetic rate, chlorophyll fluorescence, stomatal development, light response curve, and nitrogen content. In this study, lettuce plants (Lactuca sativa L.) were exposed to 200 μmol⋅m(-2)⋅s(-1) irradiance for a 16 h⋅d(-1) photoperiod under the following six treatments: monochromatic red light (R), monochromatic blue light (B) and the mixture of R and B with different R/B ratios of 12, 8, 4, and 1. Leaf photosynthetic capacity (A max) and photosynthetic rate (P n) increased with decreasing R/B ratio until 1, associated with increased stomatal conductance, along with significant increase in stomatal density and slight decrease in stomatal size. P n and A max under B treatment had 7.6 and 11.8% reduction in comparison with those under R/B = 1 treatment, respectively. The effective quantum yield of PSII and the efficiency of excitation captured by open PSII center were also significantly lower under B treatment than those under the other treatments. However, shoot dry weight increased with increasing R/B ratio with the greatest value under R/B = 12 treatment. The increase of shoot dry weight was mainly caused by increasing leaf area and leaf number, but no significant difference was observed between R and R/B = 12 treatments. Based on the above results, we conclude that quantitative B could promote photosynthetic performance or growth by stimulating morphological and physiological responses, yet there was no positive correlation between P n and shoot dry weight accumulation.", "title": "" }, { "docid": "f73881fdb6b732e7a6a79cd13618e649", "text": "Information exchange among coalition command and control (C2) systems in network-enabled environments requires ensuring that each recipient system understands and interprets messages exactly as the source system intended. The Semantic Interoperability Logical Framework (SILF) aims at meeting NATO's needs for semantically correct interoperability between C2 systems, as well as the need to adapt quickly to new missions and new combinations of coalition partners and systems. This paper presents an overview of the SILF framework and performs a detailed analysis of a case study for implementing SILF in a real-world military scenario.", "title": "" }, { "docid": "565a8ea886a586dc8894f314fa21484a", "text": "BACKGROUND\nThe Entity Linking (EL) task links entity mentions from an unstructured document to entities in a knowledge base. Although this problem is well-studied in news and social media, this problem has not received much attention in the life science domain. One outcome of tackling the EL problem in the life sciences domain is to enable scientists to build computational models of biological processes with more efficiency. However, simply applying a news-trained entity linker produces inadequate results.\n\n\nMETHODS\nSince existing supervised approaches require a large amount of manually-labeled training data, which is currently unavailable for the life science domain, we propose a novel unsupervised collective inference approach to link entities from unstructured full texts of biomedical literature to 300 ontologies. 
The approach leverages the rich semantic information and structures in ontologies for similarity computation and entity ranking.\n\n\nRESULTS\nWithout using any manual annotation, our approach significantly outperforms state-of-the-art supervised EL method (9% absolute gain in linking accuracy). Furthermore, the state-of-the-art supervised EL method requires 15,000 manually annotated entity mentions for training. These promising results establish a benchmark for the EL task in the life science domain. We also provide in depth analysis and discussion on both challenges and opportunities on automatic knowledge enrichment for scientific literature.\n\n\nCONCLUSIONS\nIn this paper, we propose a novel unsupervised collective inference approach to address the EL problem in a new domain. We show that our unsupervised approach is able to outperform a current state-of-the-art supervised approach that has been trained with a large amount of manually labeled data. Life science presents an underrepresented domain for applying EL techniques. By providing a small benchmark data set and identifying opportunities, we hope to stimulate discussions across natural language processing and bioinformatics and motivate others to develop techniques for this largely untapped domain.", "title": "" }, { "docid": "e3be398845434f3cd927a38bc4d4455f", "text": "Purpose Although extensive research exists regarding job satisfaction, many previous studies used a more restrictive, quantitative methodology. The purpose of this qualitative study is to capture the perceptions of hospital nurses within generational cohorts regarding their work satisfaction. Design/methodology/approach A preliminary qualitative, phenomenological study design explored hospital nurses' work satisfaction within generational cohorts - Baby Boomers (1946-1964), Generation X (1965-1980) and Millennials (1981-2000). A South Florida hospital provided the venue for the research. In all, 15 full-time staff nurses, segmented into generational cohorts, participated in personal interviews to determine themes related to seven established factors of work satisfaction: pay, autonomy, task requirements, administration, doctor-nurse relationship, interaction and professional status. Findings An analysis of the transcribed interviews confirmed the importance of the seven factors of job satisfaction. Similarities and differences between the generational cohorts related to a combination of stages of life and generational attributes. Practical implications The results of any qualitative research relate only to the specific venue studied and are not generalizable. However, the information gleaned from this study is transferable and other organizations are encouraged to conduct their own research and compare the results. Originality/value This study is unique, as the seven factors from an extensively used and highly respected quantitative research instrument were applied as the basis for this qualitative inquiry into generational cohort job satisfaction in a hospital setting.", "title": "" }, { "docid": "305a5a777cdffa7efc6e1715dfaac305", "text": "Open-loop transfer functions can be used to create closed-loop models of pulsewidth-modulated (PWM) converters. The closed-loop small-signal model can be used to design a controller for the switching converter with well-known linear control theory. The dynamics of the power stage for boost PWM dc-dc converter operating in continuous-conduction mode (CCM) are studied. 
The transfer functions from output current to output voltage, from duty cycle to output voltage including MOSFET delay, and from input voltage to output voltage are derived. The derivations are performed using an averaged linear circuit small-signal model of the boost converter for CCM. Experimental Bode plots and step responses were used to test the accuracy of the derived transfer functions. The theoretical and experimental responses were in excellent agreement, confirming the validity of the derived transfer functions", "title": "" }, { "docid": "a49ea9c9f03aa2d926faa49f4df63b7a", "text": "Deep stacked RNNs are usually hard to train. Recent studies have shown that shortcut connections across different RNN layers bring substantially faster convergence. However, shortcuts increase the computational complexity of the recurrent computations. To reduce the complexity, we propose the shortcut block, which is a refinement of the shortcut LSTM blocks. Our approach is to replace the self-connected parts (ct) with shortcuts (hl−2 t ) in the internal states. We present extensive empirical experiments showing that this design performs better than the original shortcuts. We evaluate our method on CCG supertagging task, obtaining a 8% relatively improvement over current state-of-the-art results.", "title": "" }, { "docid": "b4ac5df370c0df5fdb3150afffd9158b", "text": "The aggregation of many independent estimates can outperform the most accurate individual judgement 1–3 . This centenarian finding 1,2 , popularly known as the 'wisdom of crowds' 3 , has been applied to problems ranging from the diagnosis of cancer 4 to financial forecasting 5 . It is widely believed that social influence undermines collective wisdom by reducing the diversity of opinions within the crowd. Here, we show that if a large crowd is structured in small independent groups, deliberation and social influence within groups improve the crowd’s collective accuracy. We asked a live crowd (N = 5,180) to respond to general-knowledge questions (for example, \"What is the height of the Eiffel Tower?\"). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates. We found that averaging consensus decisions was substantially more accurate than aggregating the initial independent opinions. Remarkably, combining as few as four consensus choices outperformed the wisdom of thousands of individuals. The collective wisdom of crowds often provides better answers to problems than individual judgements. Here, a large experiment that split a crowd into many small deliberative groups produced better estimates than the average of all answers in the crowd.", "title": "" }, { "docid": "226d6904cc052f300b32b29f4f800574", "text": "Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. 
The result is an approach that obtains realtime performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.", "title": "" }, { "docid": "2a384fe57f79687cba8482cabfb4243b", "text": "The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: Whilst the data is available the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce – a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model requiring programmers to shoehorn their problem to the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).", "title": "" }, { "docid": "07f4d14ddc034d9b5f803a7150b84764", "text": "Reinforcement learning (RL) has had mixed success when applied to games. Large state spaces and the curse of dimensionality have limited the ability for RL techniques to learn to play complex games in a reasonable length of time. We discuss a modification of Q-learning to use nearest neighbor states to exploit previous experience in the early stages of learning. A weighting on the state features is learned using metric learning techniques, such that neighboring states represent similar game situations. Our method is tested on the arcade game Frogger, and it is shown that some of the effects of the curse of dimensionality can be mitigated.", "title": "" }, { "docid": "d387558c10c164a49030e049f4eb03c7", "text": "This paper proposes a high-frequency dynamic circuit network model of a DC motor for predicting conductive and radiated emissions in low-voltage automotive applications, and discusses a study in which this model was examined. The proposed model is based on a behavioral approach. The methodology for testing various motors together with their filters and optimization of overall system performance by achieving minima of emissions is introduced.", "title": "" }, { "docid": "5325672f176fd572f7be68a466538d95", "text": "The successful execution of location-based and feature-based queries on spatial databases requires the construction of spatial indexes on the spatial attributes. 
This is not simple when the data is unstructured as is the case when the data is a collection of documents such as news articles, which is the domain of discourse, where the spatial attribute consists of text that can be (but is not required to be) interpreted as the names of locations. In other words, spatial data is specified using text (known as a toponym) instead of geometry, which means that there is some ambiguity involved. The process of identifying and disambiguating references to geographic locations is known as geotagging and involves using a combination of internal document structure and external knowledge, including a document-independent model of the audience's vocabulary of geographic locations, termed its spatial lexicon. In contrast to previous work, a new spatial lexicon model is presented that distinguishes between a global lexicon of locations known to all audiences, and an audience-specific local lexicon. Generic methods for inferring audiences' local lexicons are described. Evaluations of this inference method and the overall geotagging procedure indicate that establishing local lexicons cannot be overlooked, especially given the increasing prevalence of highly local data sources on the Internet, and will enable the construction of more accurate spatial indexes.", "title": "" }, { "docid": "5089dff6e717807450d7f185158cc542", "text": "Previous work has demonstrated that in the context of Massively Open Online Courses (MOOCs), doing activities is more predictive of learning than reading text or watching videos (Koedinger et al., 2015). This paper breaks down the general behaviors of reading and watching into finer behaviors, and considers how these finer behaviors may provide evidence for active learning as well. By characterizing learner strategies through patterns in their data, we can evaluate which strategies (or measures of them) are predictive of learning outcomes. We investigated strategies such as page re-reading (active reading) and video watching in response to an incorrect attempt (active watching) and found that they add predictive power beyond mere counts of the amount of doing, reading, and watching.", "title": "" }, { "docid": "e9251977f62ce9dddf16730dff8e47cb", "text": "INTRODUCTION AND OBJECTIVE\nCircumcision is one of the oldest surgical procedures and one of the most frequently performed worldwide. It can be done by many different techniques. This prospective series presents the results of Plastibell® circumcision in children older than 2 years of age, evaluating surgical duration, immediate and late complications, time for plastic device separation and factors associated with it.\n\n\nMATERIALS AND METHODS\nWe prospectively analyzed 119 children submitted to Plastic Device Circumcision with Plastibell® by only one surgeon from December 2009 to June 2011. In all cases the surgery was done under general anesthesia associated with dorsal penile nerve block. Before surgery length of the penis and latero-lateral diameter of the glans were measured. Surgical duration, time of Plastibell® separation and use of analgesic medication in the post-operative period were evaluated. Patients were followed on days 15, 45, 90 and 120 after surgery.\n\n\nRESULTS\nAge at surgery varied from 2 to 12.5 (5.9 ± 2.9) years old. Mean surgical time was 3.7 ± 2.0 minutes (1.9 to 9 minutes). 
Time for plastic device separation ranged from 6 to 26 days (mean: 16 ± 4.2 days), being 14.8 days for children younger than 5 years of age and 17.4 days for those older than 5 years of age (p < 0.0001). The diameter of the Plastibell® did not interfere with separation time (p = 0.484). Late complications occurred in 32 (26.8%) subjects, the great majority being of low clinical significance, especially preputial adherences, edema of the mucosa and discrete hypertrophy of the scar, all resolving with clinical treatment. One patient still using diapers had meatal stenosis, and in one case the Plastibell® device stayed between the glans and the prepuce and needed to be removed manually.\n\n\nCONCLUSIONS\nCircumcision using a plastic device is a safe, quick and easy technique with few complications, which, when they occur, are of low clinical importance and easy to resolve. The mean time for the device to fall off is shorter in children under 6 years of age and is not influenced by the diameter of the device.", "title": "" }, { "docid": "bcdb8fea60d1d13a8c5dcf7c49632653", "text": "There is a small but growing body of research investigating how teams form and how that affects how they perform. Much of that research focuses on teams that seek to accomplish certain tasks such as writing an article or performing a Broadway musical. There has been much less investigation of the relative performance of teams that form to directly compete against another team. In this study, we report on team-vs-team competitions in the multiplayer online battle arena game Dota 2. Here, the teams' overall goal is to beat the opponent. We use this setting to observe how multilevel factors influence the relative performance of the teams. Those factors include compositional factors or attributes of the individuals comprising a team, relational factors or prior relations among individuals within a team, and ecosystem factors or overlapping prior membership of team members with others within the ecosystem of teams. We also study how these multilevel factors affect the duration of a match. Our results show that advantages at the compositional, relational and ecosystem levels predict which team will succeed in short or medium duration matches. Relational and ecosystem factors are particularly helpful in predicting the winner in short duration matches, whereas compositional factors are more important in predicting winners in medium duration matches. However, the two types of relations have opposite effects on the duration of winning. None of the three multilevel factors helps explain which team will win in long matches.", "title": "" }, { "docid": "83637dc7109acc342d50366f498c141a", "text": "With the further development of computer technology, the software development process has some new goals and requirements. In order to adapt to these changes, people have optimized and improved the previous methods. At the same time, some of the traditional software development methods have been unable to adapt to the requirements of people. Therefore, in recent years there have been some new lightweight software process development methods: that is, agile software development, which is widely used and promoted. In this paper the author first introduces the background and development of agile software development, as well as a comparison with traditional software development. Then the second chapter gives the definition of agile software development and its characteristics, principles and values. 
In the third chapter the author will highlight several different agile software development methods, and the characteristics of each method. In the fourth chapter the author will cite a specific example of how agile software development is applied in specific areas. Finally, the author will present his conclusions. This article aims to give readers an overview of agile software development and how people use it in practice.", "title": "" }, { "docid": "2bdc4df73912f4f2be4436e1fdd16d69", "text": "Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological data set to a feature-based multiclass classification. In order to collect a physiological data set from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate laboratory setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, and positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. An improved recognition accuracy of 95 percent and 70 percent for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.", "title": "" }, { "docid": "9c1267f42c32f853db912a08eddb8972", "text": "IBM's Physical Analytics Integrated Data Repository and Services (PAIRS) is a geospatial Big Data service. PAIRS contains a massive amount of curated geospatial (or more precisely spatio-temporal) data from a large number of public and private data resources, and also supports user contributed data layers. PAIRS offers an easy-to-use platform for both rapid assembly and retrieval of geospatial datasets or performing complex analytics, lowering time-to-discovery significantly by reducing the data curation and management burden. In this paper, we review recent progress with PAIRS and showcase a few exemplary analytical applications which the authors are able to build with relative ease leveraging this technology.", "title": "" }, { "docid": "13cbca0e2780a95c1e9d4928dc9d236c", "text": "Matching user accounts can help us build better users' profiles and benefit many applications. It has attracted much attention from both industry and academia. Most existing works are mainly based on rich user profile attributes. 
However, in many cases, user profile attributes are unavailable, incomplete or unreliable, either due to the privacy settings or just because users decline to share their information. This makes the existing schemes quite fragile. Users often share their activities on different social networks. This provides an opportunity to overcome the above problem. We aim to address the problem of user identification based on User Generated Content (UGC). We first formulate the problem of user identification based on UGCs and then propose a UGC-based user identification model. A supervised machine learning based solution is presented. It has three steps: firstly, we propose several algorithms to measure the spatial similarity, temporal similarity and content similarity of two UGCs; secondly, we extract the spatial, temporal and content features to exploit these similarities; afterwards, we employ the machine learning method to match user accounts, and conduct the experiments on three ground truth datasets. The results show that the proposed method has given excellent performance with F1 values reaching 89.79%, 86.78% and 86.24% on three ground truth datasets, respectively. This work presents the possibility of matching user accounts with high accessible online data. © 2018 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "754108343e8a57852d4a54abf45f5c43", "text": "Precision measurement of dc high current is usually realized by second harmonic fluxgate current transducers, but the complicated modulation and demodulation circuits with high cost have been limiting their applications. This paper presents a low-cost transducer that can substitute the traditional ones for precision measurement of high current. The new transducer, based on the principle of zero-flux, is the combination of an improved self-oscillating fluxgate sensor with a magnetic integrator in a common feedback loop. The transfer function of the zero-flux control strategy of the transducer is established to verify the validity of the qualitative analysis on operating principle. Origins and major influence factors of the modulation ripple, respectively, caused by the useful signal extraction circuit and the transformer effect are studied, and related suppression methods are proposed, which can be considered as one of the major technical modifications for performance improvement. As verification, a prototype is realized, and several key specifications, including the linearity, small-signal bandwidth, modulation ripple, ratio stability under full load, power-on repeatability, magnetic error, and temperature coefficient, are characterized. Measurement results show that the new transducer with the maximum output ripple 0.3 μA can measure dc current up to ±600 A with a relative accuracy 1.3 ppm in the full scale, and it also can measure ac current and has a -3 dB bandwidth greater than 100 kHz.", "title": "" } ]
scidocsrr
7164762ab8395c098344983691ca03af
Remote Agent: To Boldly Go Where No AI System Has Gone Before
[ { "docid": "44a70fd9726f9ed9f92a9e5bf198788f", "text": "This paper proposes a new logic programming language called GOLOG whose interpreter automatically maintains an explicit representation of the dynamic world being modeled, on the basis of user supplied axioms about the preconditions and eeects of actions and the initial state of the world. This allows programs to reason about the state of the world and consider the eeects of various possible courses of action before committing to a particular behavior. The net eeect is that programs may be written at a much higher level of abstraction than is usually possible. The language appears well suited for applications in high level control of robots and industrial processes, intelligent software agents, discrete event simulation, etc. It is based on a formal theory of action speciied in an extended version of the situation calculus. A prototype implementation in Prolog has been developed.", "title": "" }, { "docid": "44f41d363390f6f079f2e67067ffa36d", "text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢", "title": "" } ]
[ { "docid": "cabdfcf94607adef9b07799aab463d64", "text": "Monitoring the health of the elderly living independently in their own homes is a key issue in building sustainable healthcare models which support a country's ageing population. Existing approaches have typically proposed remotely monitoring the behaviour of a household's occupants through the use of additional sensors. However the costs and privacy concerns of such sensors have significantly limited their potential for widespread adoption. In contrast, in this paper we propose an approach which detects Activities of Daily Living, which we use as a proxy for the health of the household residents. Our approach detects appliance usage from existing smart meter data, from which the unique daily routines of the household occupants are learned automatically via a log Gaussian Cox process. We evaluate our approach using two real-world data sets, and show it is able to detect over 80% of kettle uses while generating less than 10% false positives. Furthermore, our approach allows earlier interventions in households with a consistent routine and fewer false alarms in the remaining households, relative to a fixed-time intervention benchmark.", "title": "" }, { "docid": "078f875d35d61689475a1507c5525eaa", "text": "This paper discusses the actuator-level control of Valkyrie, a new humanoid robot designed by NASA’s Johnson Space Center in collaboration with several external partners. We focus on several topics pertaining to Valkyrie’s series elastic actuators including control architecture, controller design, and implementation in hardware. A decentralized approach is taken in controlling Valkyrie’s many series elastic degrees of freedom. By conceptually decoupling actuator dynamics from robot limb dynamics, we simplify the problem of controlling a highly complex system and streamline the controller development process compared to other approaches. This hierarchical control abstraction is realized by leveraging disturbance observers in the robot’s joint-level torque controllers. We apply a novel analysis technique to understand the ability of a disturbance observer to attenuate the effects of unmodeled dynamics. The performance of our control approach is demonstrated in two ways. First, we characterize torque tracking performance of a single Valkyrie actuator in terms of controllable torque resolution, tracking error, bandwidth, and power consumption. Second, we perform tests on Valkyrie’s arm, a serial chain of actuators, and demonstrate its ability to accurately track torques with our decentralized control approach.", "title": "" }, { "docid": "24880289ca2b6c31810d28c8363473b3", "text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. 
We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.", "title": "" }, { "docid": "a27d955a673d4a0f7fc45d83c1ed9377", "text": "Manifold Ranking (MR), a graph-based ranking algorithm, has been widely applied in information retrieval and shown to have excellent performance and feasibility on a variety of data types. Particularly, it has been successfully applied to content-based image retrieval, because of its outstanding ability to discover underlying geometrical structure of the given image database. However, manifold ranking is computationally very expensive, both in graph construction and ranking computation stages, which significantly limits its applicability to very large data sets. In this paper, we extend the original manifold ranking algorithm and propose a new framework named Efficient Manifold Ranking (EMR). We aim to address the shortcomings of MR from two perspectives: scalable graph construction and efficient computation. Specifically, we build an anchor graph on the data set instead of the traditional k-nearest neighbor graph, and design a new form of adjacency matrix utilized to speed up the ranking computation. The experimental results on a real world image database demonstrate the effectiveness and efficiency of our proposed method. With a comparable performance to the original manifold ranking, our method significantly reduces the computational time, makes it a promising method to large scale real world retrieval problems.", "title": "" }, { "docid": "1abe9e992970ef186f919e3bf54f775b", "text": "Carcinogenicity refers to a highly toxic end point of certain chemicals, and has become an important issue in the drug development process. In this study, three novel ensemble classification models, namely Ensemble SVM, Ensemble RF, and Ensemble XGBoost, were developed to predict carcinogenicity of chemicals using seven types of molecular fingerprints and three machine learning methods based on a dataset containing 1003 diverse compounds with rat carcinogenicity. Among these three models, Ensemble XGBoost is found to be the best, giving an average accuracy of 70.1 ± 2.9%, sensitivity of 67.0 ± 5.0%, and specificity of 73.1 ± 4.4% in five-fold cross-validation and an accuracy of 70.0%, sensitivity of 65.2%, and specificity of 76.5% in external validation. In comparison with some recent methods, the ensemble models outperform some machine learning-based approaches and yield equal accuracy and higher specificity but lower sensitivity than rule-based expert systems. 
It is also found that the ensemble models could be further improved if more data were available. As an application, the ensemble models are employed to discover potential carcinogens in the DrugBank database. The results indicate that the proposed models are helpful in predicting the carcinogenicity of chemicals. A web server called CarcinoPred-EL has been built for these models ( http://ccsipb.lnu.edu.cn/toxicity/CarcinoPred-EL/ ).", "title": "" }, { "docid": "0be3178ff2f412952934a49084ee8edc", "text": "This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon’s theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design. In a curious twist of history, the dawn of the age of genomics has both seen the rise of the science of bioinformatics as a tool to cope with the enormous amounts of data being generated daily, and the decline of the theory of information as applied to molecular biology. Hailed as a harbinger of a “new movement” (Quastler 1953) along with Cybernetics, the principles of information theory were thought to be applicable to the higher functions of living organisms, and able to analyze such functions as metabolism, growth, and differentiation (Quastler 1953). Today, the metaphors and the jargon of information theory are still widely used (Maynard Smith 1999a, 1999b), as opposed to the mathematical formalism, which is too often considered to be inapplicable to biological information. Clearly, looking back it appears that too much hope was laid upon this theory’s relevance for biology. However, there was well-founded optimism that information theory ought to be able to address the complex issues associated with the storage of information in the genetic code, only to be repeatedly questioned and rebuked (see, e.g., Vincent 1994, Sarkar 1996). In this article, I outline the concepts of entropy and information (as defined by Shannon) in the context of molecular biology. We shall see that not only are these terms well-defined and useful, they also coincide precisely with what we intuitively mean when we speak about information stored in genes, for example. I then present examples of applications of the theory to measuring the information content of biomolecules, the identification of polymorphisms, RNA and protein secondary structure prediction, the prediction and analysis of molecular interactions, and drug design. 1 Entropy and Information Entropy and information are often used in conflicting manners in the literature. A precise understanding, both mathematical and intuitive, of the notion of information (and its relationship to entropy) is crucial for applications in molecular biology. Therefore, let us begin by outlining Shannon’s original entropy concept (Shannon, 1948). 1.1 Shannon’s Uncertainty Measure Entropy in Shannon’s theory (defined mathematically below) is a measure of uncertainty about the identity of objects in an ensemble. 
Thus, while “en-", "title": "" }, { "docid": "ddfd02c12c42edb2607a6f193f4c242b", "text": "We design the first Leakage-Resilient Identity-Based Encryption (LR-IBE) systems from static assumptions in the standard model. We derive these schemes by applying a hash proof technique from Alwen et al. (Eurocrypt '10) to variants of the existing IBE schemes of Boneh-Boyen, Waters, and Lewko-Waters. As a result, we achieve leakage-resilience under the respective static assumptions of the original systems in the standard model, while also preserving the efficiency of the original schemes. Moreover, our results extend to the Bounded Retrieval Model (BRM), yielding the first regular and identity-based BRM encryption schemes from static assumptions in the standard model.\n The first LR-IBE system, based on Boneh-Boyen IBE, is only selectively secure under the simple Decisional Bilinear Diffie-Hellman assumption (DBDH), and serves as a stepping stone to our second fully secure construction. This construction is based on Waters IBE, and also relies on the simple DBDH. Finally, the third system is based on Lewko-Waters IBE, and achieves full security with shorter public parameters, but is based on three static assumptions related to composite order bilinear groups.", "title": "" }, { "docid": "d662536cbd7dca2ce06b3e1e44362776", "text": "Internet of Things (IoT) devices such as the Amazon Echo – a smart speaker developed by Amazon – are undoubtedly great sources of potential digital evidence due to their ubiquitous use and their always-on mode of operation, constituting a human-life's black box. The Amazon Echo in particular plays a centric role for the cloud-based intelligent virtual assistant (IVA) Alexa developed by Amazon Lab126. The Alexa-enabled wireless smart speaker is the gateway for all voice commands submitted to Alexa. Moreover, the IVA interacts with a plethora of compatible IoT devices and third-party applications that leverage cloud resources. Understanding the complex cloud ecosystem that allows ubiquitous use of Alexa is paramount to supporting digital investigations when the need arises. This paper discusses methods for digital forensics pertaining to the IVA Alexa's ecosystem. The primary contribution of this paper consists of a new efficient approach of combining cloud-native forensics with client-side forensics (forensics for companion devices), to support practical digital investigations. Based on a deep understanding of the targeted ecosystem, we propose a proof-of-concept tool, CIFT, that supports identification, acquisition and analysis of both native artifacts from the cloud and client-centric artifacts from local devices (mobile applications", "title": "" }, { "docid": "c02cc2c217da6614bccb90ac8b7c7506", "text": "This paper presents a method by which a reinforcement learning agent can automatically discover certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to other, related tasks through the reuse of its ability to attain subgoals. The agent discovers subgoals based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. 
We illustrate this approach using several gridworld tasks.", "title": "" }, { "docid": "998f2515ea7ceb02f867b709d4a987f9", "text": "Crop pest and disease diagnosis is among the important issues arising in the agriculture sector, since it has a significant impact on a nation's agricultural production. Applying expert system technology to crop pest and disease diagnosis has the potential to speed up and improve advisory services. However, the development of expert systems for diagnosing the pest and disease problems of a particular crop, as well as similar research works, remains limited. Therefore, this study investigated the use of expert systems for managing crop pests and diseases in selected published works. This article aims to identify and explain the trends of methodologies used by those works. As a result, a conceptual framework for managing crop pests and diseases was proposed on the basis of the selected previous works. It is hoped that this article will benefit the growth of research pertaining to the development of expert systems, especially for managing crop pests and diseases in the agriculture domain.", "title": "" }, { "docid": "b04a1c4a52cfe9310ff1e895ccdec35c", "text": "The problem of recovering the sparse and low-rank components of a matrix captures a broad spectrum of applications. The authors of [4] proposed the concept of \"rank-sparsity incoherence\" to characterize the fundamental identifiability of the recovery, and derived practical sufficient conditions to ensure the high possibility of recovery. This exact recovery is achieved via solving a convex relaxation problem where the l1 norm and the nuclear norm are utilized as surrogates for sparsity and low rank. Numerically, this convex relaxation problem was reformulated into a semi-definite programming (SDP) problem whose dimension is considerably enlarged, and this SDP reformulation was proposed to be solved by generic interior-point solvers in [4]. This paper focuses on the algorithmic improvement for the sparse and low-rank recovery. In particular, we observe that the convex relaxation problem generated by the approach of [4] is actually well-structured in both the objective function and constraint, and it fits perfectly the applicable range of the classical alternating direction method (ADM). Hence, we propose the ADM approach for accomplishing the sparse and low-rank recovery, by fully exploiting the high-level separable structure of the convex relaxation problem. Preliminary numerical results are reported to verify the attractive efficiency of the ADM approach for recovering sparse and low-rank components of matrices.", "title": "" }, { "docid": "aac360802c767fb9594e033341883578", "text": "The protection mechanisms of computer systems control the access to objects, especially information objects. The range of responsibilities of these mechanisms includes at one extreme completely isolating executing programs from each other, and at the other extreme permitting complete cooperation and shared access among executing programs. Within this range one can identify at least seven levels at which protection mechanisms can be conceived as being required, each level being more difficult than its predecessor to implement:\n 1. No sharing at all (complete isolation).\n 2. Sharing copies of programs or data files.\n 3. Sharing originals of programs or data files.\n 4. Sharing programming systems or subsystems.\n 5. 
Permitting the cooperation of mutually suspicious subsystems---e.g., as with debugging or proprietary subsystems.\n 6. Providing \"memoryless\" subsystems---i.e., systems which, having performed their tasks, are guaranteed to have kept no secret record of the task performed (an income-tax computing service, for example, must be allowed to keep billing information on its use by customers but not to store information secretly on customers' incomes).\n 7. Providing \"certified\" subsystems---i.e., those whose correctness has been completely validated and is guaranteed a priori.", "title": "" }, { "docid": "851de4b014dfeb6f470876896b0416b3", "text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such an approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of low-frequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique
We discuss how the model of off-task behavior can be used within interactive learning environments which respond to when students are off-task.", "title": "" }, { "docid": "dc84e401709509638a1a9e24d7db53e1", "text": "AIM AND OBJECTIVES\nExocrine pancreatic insufficiency caused by inflammation or pancreatic tumors results in nutrient malfunction by a lack of digestive enzymes and neutralization compounds. Despite satisfactory clinical results with current enzyme therapies, a normalization of fat absorption in patients is rare. An individualized therapy is required that includes high dosage of enzymatic units, usage of enteric coating, and addition of gastric proton pump inhibitors. The key goal to improve this therapy is to identify digestive enzymes with high activity and stability in the gastrointestinal tract.\n\n\nMETHODS\nWe cloned and analyzed three novel ciliate lipases derived from Tetrahymena thermophila. Using highly precise pH-STAT-titration and colorimetric methods, we determined stability and lipolytic activity under physiological conditions in comparison with commercially available porcine and fungal digestive enzyme preparations. We measured from pH 2.0 to 9.0, with different bile salts concentrations, and substrates such as olive oil and fat derived from pig diet.\n\n\nRESULTS\nCiliate lipases CL-120, CL-130, and CL-230 showed activities up to 220-fold higher than Creon, pancreatin standard, and rizolipase Nortase within a pH range from pH 2.0 to 9.0. They are highly active in the presence of bile salts and complex pig diet substrate, and more stable after incubation in human gastric juice compared with porcine pancreatic lipase and rizolipase.\n\n\nCONCLUSIONS\nThe newly cloned and characterized lipases fulfilled all requirements for high activity under physiological conditions. These novel enzymes are therefore promising candidates for an improved enzyme replacement therapy for exocrine pancreatic insufficiency.", "title": "" }, { "docid": "ca23813c7caf031c97ae5c0db447d39d", "text": "Sequence-to-sequence models, such as attention-based models in automatic speech recognition (ASR), are typically trained to optimize the cross-entropy criterion which corresponds to improving the log-likelihood of the data. However, system performance is usually measured in terms of word error rate (WER), not log-likelihood. Traditional ASR systems benefit from discriminative sequence training which optimizes criteria such as the state-level minimum Bayes risk (sMBR) which are more closely related to WER. In the present work, we explore techniques to train attention-based models to directly minimize expected word error rate. We consider two loss functions which approximate the expected number of word errors: either by sampling from the model, or by using N-best lists of decoded hypotheses, which we find to be more effective than the sampling-based method. In experimental evaluations, we find that the proposed training procedure improves performance by up to 8.2% relative to the baseline system. This allows us to train grapheme-based, uni-directional attention-based models which match the performance of a traditional, state-of-the-art, discriminative sequence-trained system on a mobile voice-search task.", "title": "" }, { "docid": "649b1f289395aa6251fe9f3288209b67", "text": "Besides game-based learning, gamification is an upcoming trend in education, studied in various empirical studies and found in many major learning management systems. 
Employing a newly developed qualitative instrument for assessing gamification in a system, we studied five popular LMS for their specific implementations. The instrument enabled experts to extract affordances for gamification in the five categories experiential, mechanics, rewards, goals, and social. Results show large similarities in all of the systems studied and few varieties in approaches to gamification.", "title": "" }, { "docid": "4fe5c25f57d5fa5b71b0c2b9dae7db29", "text": "Position control of a quad tilt-wing UAV via a nonlinear hierarchical adaptive control approach is presented. The hierarchy consists of two levels. In the upper level, a model reference adaptive controller creates virtual control commands so as to make the UAV follow a given desired trajectory. The virtual control inputs are then converted to desired attitude angle references which are fed to the lower level attitude controller. Lower level controller is a nonlinear adaptive controller. The overall controller is developed for the full nonlinear dynamics of the tilt-wing UAV and thus no linearization is required. In addition, since the approach is adaptive, uncertainties in the UAV dynamics can be handled. Performance of the controller is presented via simulation results.", "title": "" }, { "docid": "e4dc1f30a914dc6f710f23b5bc047978", "text": "Intelligence, expertise, ability and talent, as these terms have traditionally been used in education and psychology, are socially agreed upon labels that minimize the dynamic, evolving, and contextual nature of individual–environment relations. These hypothesized constructs can instead be described as functional relations distributed across whole persons and particular contexts through which individuals appear knowledgeably skillful. The purpose of this article is to support a concept of ability and talent development that is theoretically grounded in 5 distinct, yet interrelated, notions: ecological psychology, situated cognition, distributed cognition, activity theory, and legitimate peripheral participation. Although talent may be reserved by some to describe individuals possessing exceptional ability and ability may be described as an internal trait, in our description neither ability nor talent are possessed. Instead, they are treated as equivalent terms that can be used to describe functional transactions that are situated across person-in-situation. Further, and more important, by arguing that ability is part of the individual–environment transaction, we take the potential to appear talented out of the hands (or heads) of the few and instead treat it as an opportunity that is available to all although it may be actualized more frequently by some.", "title": "" } ]
scidocsrr
5cb830db37198d577a47ceb88886514f
Social media analytics: a survey of techniques, tools and platforms
[ { "docid": "477e4a6930d147a598e1e0c453062ed2", "text": "Stock markets are driven by a multitude of dynamics in which facts and beliefs play a major role in affecting the price of a company’s stock. In today’s information age, news can spread around the globe in some cases faster than they happen. While it can be beneficial for many applications including disaster prevention, our aim in this thesis is to use the timely release of information to model the stock market. We extract facts and beliefs from the population using one of the fastest growing social networking tools on the Internet, namely Twitter. We examine the use of Natural Language Processing techniques with a predictive machine learning approach to analyze millions of Twitter posts from which we draw distinctive features to create a model that enables the prediction of stock prices. We selected several stocks from the NASDAQ stock exchange and collected Intra-Day stock quotes during a period of two weeks. We build different feature representations from the raw Twitter posts and combined them with the stock price in order to build a regression model using the Support Vector Regression algorithm. We were able to build models of the stocks which predicted discrete prices that were close to a strong baseline. We further investigated the prediction of future prices, on average predicting 15 minutes ahead of the actual price, and evaluated the results using a Virtual Stock Trading Engine. These results were in general promising, but contained also some random variations across the different datasets.", "title": "" }, { "docid": "07ffe189312da8519c4a6260402a0b22", "text": "Computational social science is an emerging research area at the intersection of computer science, statistics, and the social sciences, in which novel computational methods are used to answer questions about society. The field is inherently collaborative: social scientists provide vital context and insight into pertinent research questions, data sources, and acquisition methods, while statisticians and computer scientists contribute expertise in developing mathematical models and computational tools. New, large-scale sources of demographic, behavioral, and network data from the Internet, sensor networks, and crowdsourcing systems augment more traditional data sources to form the heart of this nascent discipline, along with recent advances in machine learning, statistics, social network analysis, and natural language processing. The related research area of social computing deals with the mechanisms through which people interact with computational systems, examining questions such as how and why people contribute user-generated content and how to design systems that better enable them to do so. Examples of social computing systems include prediction markets, crowdsourcing markets, product review sites, and collaboratively edited wikis, all of which encapsulate some notion of aggregating crowd wisdom, beliefs, or ideas—albeit in different ways. Like computational social science, social computing blends techniques from machine learning and statistics with ideas from the social sciences. 
For example, the economics literature on incentive design has been especially influential.", "title": "" }, { "docid": "1bb246ec4e68bd7072983e2824e8f9ff", "text": "With the increasing availability of electronic documents and the rapid growth of the World Wide Web, the task of automatic categorization of documents became the key method for organizing the information and knowledge discovery. Proper classification of e-documents, online news, blogs, e-mails and digital libraries need text mining, machine learning and natural language processing techniques to get meaningful knowledge. The aim of this paper is to highlight the important techniques and methodologies that are employed in text documents classification, while at the same time making awareness of some of the interesting challenges that remain to be solved, focused mainly on text representation and machine learning techniques. This paper provides a review of the theory and methods of document classification and text mining, focusing on the existing literature.", "title": "" } ]
[ { "docid": "06e3d228e9fac29dab7180e56f087b45", "text": "Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade between clicking regions with high search target probability or high expected image content information. Image content IG was established from “information-maps” based on participants exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG is in this thesis not identical with the information theoretic concept of information gain, the quantities are however probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image based IG. It was also hypothesised that image based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. Results support that IG is rewarding as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.", "title": "" }, { "docid": "877e7654a4e42ab270a96e87d32164fd", "text": "The presence of gender stereotypes in many aspects of society is a well-known phenomenon. In this paper, we focus on studying such stereotypes and bias in Hindi movie industry (Bollywood). We analyze movie plots and posters for all movies released since 1970. The gender bias is detected by semantic modeling of plots at inter-sentence and intrasentence level. Different features like occupation, introduction of cast in text, associated actions and descriptions are captured to show the pervasiveness of gender bias and stereotype in movies. We derive a semantic graph and compute centrality of each character and observe similar bias there. We also show that such bias is not applicable for movie posters where females get equal importance even though their character has little or no impact on the movie plot. Furthermore, we explore the movie trailers to estimate on-screen time for males and females and also study the portrayal of emotions by gender in them. The silver lining is that our system was able to identify 30 movies over last 3 years where such stereotypes were broken.", "title": "" }, { "docid": "16a1f15e8e414b59a230fb4a28c53cc7", "text": "In this study we examined whether the effects of mental fatigue on behaviour are due to reduced action monitoring as indexed by the error related negativity (Ne/ERN), N2 and contingent negative variation (CNV) event-related potential (ERP) components. Therefore, we had subjects perform a task, which required a high degree of action monitoring, continuously for 2h. 
In addition we tried to relate the observed behavioural and electrophysiological changes to motivational processes and individual differences. Changes in task performance due to fatigue were accompanied by a decrease in Ne/ERN and N2 amplitude, reflecting impaired action monitoring, as well as a decrease in CNV amplitude which reflects reduced response preparation with increasing fatigue. Increasing the motivational level of our subjects resulted in changes in behaviour and brain activity that were different for individual subjects. Subjects that increased their performance accuracy displayed an increase in Ne/ERN amplitude, while subjects that increased their response speed displayed an increase in CNV amplitude. We will discuss the effects prolonged task performance on the behavioural and physiological indices of action monitoring, as well as the relationship between fatigue, motivation and individual differences.", "title": "" }, { "docid": "13572c74a989b8677eec026788b381fe", "text": "We examined the effect of stereotype threat on blood pressure reactivity. Compared with European Americans, and African Americans under little or no stereotype threat, African Americans under stereotype threat exhibited larger increases in mean arterial blood pressure during an academic test, and performed more poorly on difficult test items. We discuss the significance of these findings for understanding the incidence of hypertension among African Americans.", "title": "" }, { "docid": "32bdd9f720989754744eddb9feedbf32", "text": "Readability depends on many factors ranging from shallow features like word length to semantic ones like coherence. We introduce novel graph-based coherence features based on frequent subgraphs and compare their ability to assess the readability of Wall Street Journal articles. In contrast to Pitler and Nenkova (2008) some of our graph-based features are significantly correlated with human judgments. We outperform Pitler and Nenkova (2008) in the readability ranking task by more than 5% accuracy thus establishing a new state-of-the-art on this dataset.", "title": "" }, { "docid": "c8c82af8fc9ca5e0adac5b8b6a14031d", "text": "PURPOSE\nTo systematically review the results of arthroscopic transtibial pullout repair (ATPR) for posterior medial meniscus root tears.\n\n\nMETHODS\nA systematic electronic search of the PubMed database and the Cochrane Library was performed in September 2014 to identify studies that reported clinical, radiographic, or second-look arthroscopic outcomes of ATPR for posterior medial meniscus root tears. Included studies were abstracted regarding study characteristics, patient demographic characteristics, surgical technique, rehabilitation, and outcome measures. The methodologic quality of the included studies was assessed with the modified Coleman Methodology Score.\n\n\nRESULTS\nSeven studies with a total of 172 patients met the inclusion criteria. The mean patient age was 55.3 years, and 83% of patients were female patients. Preoperative and postoperative Lysholm scores were reported for all patients. After a mean follow-up period of 30.2 months, the Lysholm score increased from 52.4 preoperatively to 85.9 postoperatively. On conventional radiographs, 64 of 76 patients (84%) showed no progression of Kellgren-Lawrence grading. Magnetic resonance imaging showed no progression of cartilage degeneration in 84 of 103 patients (82%) and showed reduced medial meniscal extrusion in 34 of 61 patients (56%). 
On the basis of second-look arthroscopy and magnetic resonance imaging in 137 patients, the healing status was rated as complete in 62%, partial in 34%, and failed in 3%. Overall, the methodologic quality of the included studies was fair, with a mean modified Coleman Methodology Score of 63.\n\n\nCONCLUSIONS\nATPR significantly improves functional outcome scores and seems to prevent the progression of osteoarthritis in most patients, at least during a short-term follow-up. Complete healing of the repaired root and reduction of meniscal extrusion seem to be less predictable, being observed in only about 60% of patients. Conclusions about the progression of osteoarthritis and reduction of meniscal extrusion are limited by the small portion of patients undergoing specific evaluation (44% and 35% of the study group, respectively).\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of Level III and IV studies.", "title": "" }, { "docid": "a0c37bb6608f51f7095d6e5392f3c2f9", "text": "The main study objective was to develop robust processing and analysis techniques to facilitate the use of small-footprint lidar data for estimating plot-level tree height by measuring individual trees identifiable on the three-dimensional lidar surface. Lidar processing techniques included data fusion with multispectral optical data and local filtering with both square and circular windows of variable size. The lidar system used for this study produced an average footprint of 0.65 m and an average distance between laser shots of 0.7 m. The lidar data set was acquired over deciduous and coniferous stands with settings typical of the southeastern United States. The lidar-derived tree measurements were used with regression models and cross-validation to estimate tree height on 0.017-ha plots. For the pine plots, lidar measurements explained 97 percent of the variance associated with the mean height of dominant trees. For deciduous plots, regression models explained 79 percent of the mean height variance for dominant trees. Filtering for local maximum with circular windows gave better fitting models for pines, while for deciduous trees, filtering with square windows provided a slightly better model fit. Using lidar and optical data fusion to differentiate between forest types provided better results for estimating average plot height for pines. Estimating tree height for deciduous plots gave superior results without calibrating the search window size based on forest type. Introduction Laser scanner systems currently available have experienced a remarkable evolution, driven by advances in the remote sensing and surveying industry. Lidar sensors offer impressive performance that challange physical barriers in the optical and electronic domain by offering a high density of points at scanning frequencies of 50,000 pulses/second, multiple echoes per laser pulse, intensity measurements for the returning signal, and centimeter accuracy for horizontal and vertical positioning. Given a high density of points, processing algorithms can identify single trees or groups of trees in order to extract various measurements on their three-dimensional representation (e.g., Hyyppä and Inkinen, 2002). Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height Sorin C. Popescu and Randolph H. Wynne The foundations of lidar forest measurements lie with the photogrammetric techniques developed to assess tree height, volume, and biomass. 
Lidar characteristics, such as high sampling intensity, extensive areal coverage, ability to penetrate beneath the top layer of the canopy, precise geolocation, and accurate ranging measurements, make airborne laser systems useful for directly assessing vegetation characteristics. Early lidar studies had been used to estimate forest vegetation characteristics, such as percent canopy cover, biomass (Nelson et al., 1984; Nelson et al., 1988a; Nelson et al., 1988b; Nelson et al., 1997), and gross-merchantable timber volume (Maclean and Krabill, 1986). Research efforts investigated the estimation of forest stand characteristics with scanning lasers that provided lidar data with either relatively large laser footprints, i.e., 5 to 25 m (Harding et al., 1994; Lefsky et al., 1997; Weishampel et al., 1997; Blair et al., 1999; Lefsky et al., 1999; Means et al., 1999) or small footprints, but with only one laser return (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Hyyppä et al., 2001). A small-footprint lidar with the potential to record the entire time-varying distribution of returned pulse energy or waveform was used by Nilsson (1996) for measuring tree heights and stand volume. As more systems operate with high performance, research efforts for forestry applications of lidar have become very intense and resulted in a series of studies that proved that lidar technology is well suited for providing estimates of forest biophysical parameters. Needs for timely and accurate estimates of forest biophysical parameters have arisen in response to increased demands on forest inventory and analysis. The height of a forest stand is a crucial forest inventory attribute for calculating timber volume, site potential, and silvicultural treatment scheduling. Measuring of stand height by current manual photogrammetric or field survey techniques is time consuming and rather expensive. Tree heights have been derived from scanning lidar data sets and have been compared with ground-based canopy height measurements (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Næsset and Bjerknes, 2001; Næsset and Økland, 2002; Persson et al., 2002; Popescu, 2002; Popescu et al., 2002; Holmgren et al., 2003; McCombs et al., 2003). Despite the intense research efforts, practical applications of P H OTO G R A M M E T R I C E N G I N E E R I N G & R E M OT E S E N S I N G May 2004 5 8 9 Department of Forestry, Virginia Tech, 319 Cheatham Hall (0324), Blacksburg, VA 24061 (wynne@vt.edu). S.C. Popescu is presently with the Spatial Sciences Laboratory, Department of Forest Science, Texas A&M University, 1500 Research Parkway, Suite B223, College Station, TX 778452120 (s-popescu@tamu.edu). Photogrammetric Engineering & Remote Sensing Vol. 70, No. 5, May 2004, pp. 589–604. 0099-1112/04/7005–0589/$3.00/0 © 2004 American Society for Photogrammetry and Remote Sensing 02-099.qxd 4/5/04 10:44 PM Page 589", "title": "" }, { "docid": "71243804831966d5a312f5dc3c3a61a5", "text": "Datasets: KBP 2015 for training and news articles in KBP 2016, 2017 for testing. 
Model BCUB CEAFE MUC BLANC AVG KBP 2016 Local Classifier 51.47 47.96 26.29 30.82 39.13 Basic ILP 51.44 47.77 26.65 30.95 39.19 +Discourse 51.67 49.1 34.08 34.08 42.23 Joint Learning 50.16 48.59 32.41 32.72 40.97 KBP 2017 Local Classifier 50.24 48.47 30.81 29.94 39.87 Basic ILP 50.4 48.49 31.33 30.58 40.2 +Discourse 50.35 48.61 37.24 31.94 42.04 Table 2: Results for event coreference resolution systems on the KBP 2016 and 2017 corpus. Joint Learning results correspond to the result files evaluated in Lu and Ng, 2017.", "title": "" }, { "docid": "ef011f601c37f0d08c2567fe7e231324", "text": "We live in a world where data are generated from a myriad of sources, and it is really cheap to collect and store such data. However, the real benefit is not related to the data itself, but to the algorithms that are capable of processing such data in a tolerable elapsed time, and of extracting valuable knowledge from it. Therefore, the use of Big Data Analytics tools provides very significant advantages to both industry and academia. The MapReduce programming framework can be stressed as the main paradigm related to such tools. It is mainly identified by carrying out a distributed execution for the sake of providing a high degree of scalability, together with a fault-", "title": "" }, { "docid": "84f47a0e228bc672c4e0c29dd217f6df", "text": "Semantic annotation plays an important role for semantic-aware web service discovery, recommendation and composition. In recent years, many approaches and tools have emerged to assist in semantic annotation creation and analysis. However, the Quality of Semantic Annotation (QoSA) is largely overlooked despite its significant impact on the effectiveness of semantic-aware solutions. Moreover, improving the QoSA is time-consuming and requires significant domain knowledge. Therefore, how to verify and improve the QoSA has become a critical issue for semantic web services. In order to facilitate this process, this paper presents a novel lifecycle framework aiming at QoSA assessment and optimization. The QoSA is formally defined as the success rate of web service invocations, associated with a verification framework. Based on a local instance repository constructed from the execution information of the invocations, a two-layer optimization method including a local-feedback strategy and a global-feedback one is proposed to improve the QoSA. Experiments on real-world web services show that our framework can gain 65.95%~148.16% improvement in QoSA, compared with the original annotation without optimization.", "title": "" }, { "docid": "afa70058c6df7b85040ce40be752bb89", "text": "The authors attempt to identify the various causes of stator and rotor failures in three-phase squirrel cage induction motors. A specific methodology is proposed to facilitate an accurate analysis of these failures. It is noted that, due to the destructive nature of most failures, it is not easy, and is sometimes impossible, to determine the primary cause of failure. By a process of elimination, one can usually be assured of properly identifying the most likely cause of the failure. It is pointed out that the key point in going through this process of elimination is to use the basic steps of analyzing the failure class and pattern, noting the general motor appearance, identifying the operating condition at the time of failure, and gaining knowledge of the past history of the motor and application.", "title": "" }, { "docid": "7f8777738b0e135f2d5d3666677d58dd", "text": "Ph. D.
Sandra Margeti} Department of Laboratory Haematology and Coagulation Clinical Institute of Chemistry Medical School University Hospital Sestre milosrdnice Vinogradska 29 10 000 Zagreb, Croatia Tel: +385 1 3787 115 Fax: +385 1 3768 280 e-mail: margeticsandraagmail.com Summary: Laboratory investigation of thrombophilia is aimed at detecting the well-established hereditary and acquired causes of venous thromboembolism, including activated protein C resistance/factor V Leiden mutation, prothrombin G20210A mutation, deficiencies of the physio logical anticoagulants antithrombin, protein C and protein S, the presence of antiphospholipid antibodies and increased plasma levels of homocysteine and coagulation factor VIII. In contrast, investigation of dysfibrinogenemia, a very rare thrombophilic risk factor, should only be considered in a patient with evidence of familial or recurrent thrombosis in the absence of all evaluated risk factors mentioned above. At this time, thrombophilia investigation is not recommended for other potential hereditary or acquired risk factors whose association with increased risk for thrombosis has not been proven sufficiently to date. In order to ensure clinical relevance of testing and to avoid any misinterpretation of results, laboratory investigation of thrombophilia should always be performed in accordance with the recommended guidelines on testing regarding the careful selection of patients, time of testing and assays and assay methods used. The aim of this review is to summarize the most important aspects on thrombophilia testing, including whom and when to test, what assays and assay methods to use and all other variables that should be considered when performing laboratory investigation of thrombophilia.", "title": "" }, { "docid": "6d4aa3d000a565b562186d3b3dba1a22", "text": "Recommender systems are software applications that provide or suggest items to intended users. These systems use filtering techniques to provide recommendations. The major ones of these techniques are collaborative-based filtering technique, content-based technique, and hybrid algorithm. The motivation came as a result of the need to integrate recommendation feature in digital libraries in order to reduce information overload. Content-based technique is adopted because of its suitability in domains or situations where items are more than the users. TF-IDF (Term Frequency Inverse Document Frequency) and cosine similarity were used to determine how relevant or similar a research paper is to a user's query or profile of interest. Research papers and user's query were represented as vectors of weights using Keyword-based Vector Space model. The weights indicate the degree of association between a research paper and a user's query. This paper also presents an algorithm to provide or suggest recommendations based on users' query. The algorithm employs both TF-IDF weighing scheme and cosine similarity measure. Based on the result or output of the system, integrating recommendation feature in digital libraries will help library users to find most relevant research papers to their needs. Keywords—Recommender Systems; Content-Based Filtering; Digital Library; TF-IDF; Cosine Similarity; Vector Space Model", "title": "" }, { "docid": "29734bed659764e167beac93c81ce0a7", "text": "Fashion classification encompasses the identification of clothing items in an image. The field has applications in social media, e-commerce, and criminal law. 
In our work, we focus on four tasks within the fashion classification umbrella: (1) multiclass classification of clothing type; (2) clothing attribute classification; (3) clothing retrieval of nearest neighbors; and (4) clothing object detection. We report accuracy measurements for clothing style classification (50.2%) and clothing attribute classification (74.5%) that outperform baselines in the literature for the associated datasets. We additionally report promising qualitative results for our clothing retrieval and clothing object detection tasks.", "title": "" }, { "docid": "809d795cb5e5147979f8dffed44e6a44", "text": "The goal of this paper is to study the characteristics of various control architectures (e.g. centralized, hierarchical, distributed, and hybrid) for a team of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) in performing collaborative surveillance and crowd control. To this end, an overview of different control architectures is first provided covering their functionalities and interactions. Then, three major functional modules needed for crowd control are discussed under those architectures, including 1) crowd detection using computer vision algorithms, 2) crowd tracking using an enhanced information aggregation strategy, and 3) vehicles motion planning using a graph search algorithm. Depending on the architectures, these modules can be placed in the ground control center or embedded in each vehicle. To test and demonstrate characteristics of various control architectures, a testbed has been developed involving these modules and various hardware and software components, such as 1) assembled UAVs and UGV, 2) a real-time simulator (in Repast Simphony), 3) off-the-shelf ARM architecture computers (ODROID-U2/3), 4) autopilot units with GPS sensors, and 5) multipoint wireless networks using XBee. Experiments successfully demonstrate the pros and cons of the considered control architectures in terms of computational performance in responding to different system conditions (e.g. information sharing).", "title": "" }, { "docid": "bb0ac3d88646bf94710a4452ddf50e51", "text": "Everyday knowledge about living things, physical objects and the beliefs and desires of other people appears to be organized into sophisticated systems that are often called intuitive theories. Two long term goals for psychological research are to understand how these theories are mentally represented and how they are acquired. We argue that the language of thought hypothesis can help to address both questions. First, compositional languages can capture the content of intuitive theories. Second, any compositional language will generate an account of theory learning which predicts that theories with short descriptions tend to be preferred. We describe a computational framework that captures both ideas, and compare its predictions to behavioral data from a simple theory learning task. Any comprehensive account of human knowledge must acknowledge two principles. First, everyday knowledge is more than a list of isolated facts, and much of it appears to be organized into richly structured systems that are sometimes called intuitive theories. Even young children, for instance, have systematic beliefs about domains including folk physics, folk biology, and folk psychology [10]. Second, some aspects of these theories appear to be learned. 
Developmental psychologists have explored how intuitive theories emerge over the first decade of life, and at least some of these changes appear to result from learning. Although theory learning raises some challenging problems, two computational principles that may support this ability have been known for many years. First, a theory-learning system must be able to represent the content of any theory that it acquires. A learner that cannot represent a given system of concepts is clearly unable to learn this system from data. Second, there will always be many systems of concepts that are compatible with any given data set, and a learner must rely on some a priori ordering of the set of possible theories to decide which candidate is best [5, 9]. Loosely speaking, this ordering can be identified with a simplicity measure, or a prior distribution over the space of possible theories. There is at least one natural way to connect these two computational principles. Suppose that intuitive theories are represented in a “language of thought:” a language that allows complex concepts to be represented as combinations of simpler concepts [5]. A compositional language provides a straightforward way to construct sophisticated theories, but also provides a natural ordering over the resulting space of theories: the a priori probability of a theory can be identified with its length in this representation language [3, 7]. Combining this prior distribution with an engine for Bayesian inference leads immediately to a computational account of theory learning. There may be other ways to explain how people represent and acquire complex systems of knowledge, but it is striking that the “language of thought” hypothesis can address both questions. This paper describes a computational framework that helps to explain how theories are acquired, and that can be used to evaluate different proposals about the language of thought. Our approach builds on previous discussions of concept learning that have explored the link between compositional representations and inductive inference. Two recent approaches propose that concepts are represented in a form of propositional logic, and that the a priori plausibility of an inductive hypothesis is related to the length of its representation in this language [4, 6]. Our approach is similar in spirit, but is motivated in part by the need for languages richer than propositional logic. The framework we present is extremely general, and is compatible with virtually any representation language, including various forms of predicate logic. Methods for learning theories expressed in predicate logic have previously been explored in the field of Inductive Logic Programming, and we recently proposed a theory-learning model that is inspired by this tradition [7]. Our current approach is motivated by similar goals, but is better able to account for the discovery of abstract theoretical laws. The next section describes our computational framework and introduces the specific logical language that we will consider throughout. Our framework allows relatively sophisticated theories to be represented and learned, but we evaluate it here by applying it to a simple learning problem and comparing its predictions with human inductive inferences. A Bayesian approach to theory discovery Suppose that a learner observes some of the relationships that hold among a fixed, finite set of entities, and wishes to discover a theory that accounts for these data. 
Suppose, for instance, that the entities are thirteen adults from a remote tribe (a through m), and that the data specify that the spouse relation (S(·, ·)) is true of some pairs (Figure 1). One candidate theory states that S(·, ·) is a symmetric relation, that some of the individuals are male (M(·)), that marriages are permitted only between males and non-males, and that males may take multiple spouses but non-males may have only one spouse (Figure 1b). Other theories are possible, including the theory which states only that S(·, ·) is symmetric. Accounts of theory learning should distinguish between at least three kinds of entities: theories, models, and data. A theory is a set of statements that captures constraints on possible configurations of the world. For instance, the theory in Figure 1b rules out configurations where the spouse relation is asymmetric. A model of a theory specifies the extension", "title": "" }, { "docid": "f120d34996b155a413247add6adc6628", "text": "The storage and computation requirements of Convolutional Neural Networks (CNNs) can be prohibitive for exploiting these models over low-power or embedded devices. This paper reduces the computational complexity of the CNNs by minimizing an objective function, including the recognition loss that is augmented with a sparsity-promoting penalty term. The sparsity structure of the network is identified using the Alternating Direction Method of Multipliers (ADMM), which is widely used in large optimization problems. This method alternates between promoting the sparsity of the network and optimizing the recognition performance, which allows us to exploit the two-part structure of the corresponding objective functions. In particular, we take advantage of the separability of the sparsity-inducing penalty functions to decompose the minimization problem into sub-problems that can be solved sequentially. Applying our method to a variety of state-of-the-art CNN models, our proposed method is able to simplify the original model, generating models with less computation and fewer parameters, while maintaining and often improving generalization performance. Accomplishments on a variety of models strongly verify that our proposed ADMM-based method can be a very useful tool for simplifying and improving deep CNNs.", "title": "" }, { "docid": "4fabfd530004921901d09134ebfd0eae", "text": "“Additive Manufacturing Technologies: 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing” is authored by Ian Gibson, David Rosen and Brent Stucker, who collectively possess 60 years’ experience in the fi eld of additive manufacturing (AM). This is the second edition of the book which aims to include current developments and innovations in a rapidly changing fi eld. Its primary aim is to serve as a teaching aid for developing and established curricula, therefore becoming an all-encompassing introductory text for this purpose. It is also noted that researchers may fi nd the text useful as a guide to the ‘state-of-the-art’ and to identify research opportunities. The book is structured to provide justifi cation and information for the use and development of AM by using standardised terminology to conform to standards (American Society for Testing and Materials (ASTM) F42) introduced since the fi rst edition. The basic principles and historical developments for AM are introduced in summary in the fi rst three chapters of the book and this serves as an excellent introduction for the uninitiated. 
Chapters 4–11 focus on the core technologies of AM individually and, in most cases, in comprehensive detail which gives those interested in the technical application and development of the technologies a solid footing. The remaining chapters provide guidelines and examples for various stages of the process including machine and/or materials selection, design considerations and software limitations, applications and post-processing considerations.", "title": "" }, { "docid": "3b9b49f8c2773497f8e05bff4a594207", "text": "SSD (Single Shot Detector) is one of the state-of-the-art object detection algorithms, and it combines high detection accuracy with real-time speed. However, it is widely recognized that SSD is less accurate in detecting small objects compared to large objects, because it ignores the context from outside the proposal boxes. In this paper, we present CSSD–a shorthand for context-aware single-shot multibox object detector. CSSD is built on top of SSD, with additional layers modeling multi-scale contexts. We describe two variants of CSSD, which differ in their context layers, using dilated convolution layers (DiCSSD) and deconvolution layers (DeCSSD) respectively. The experimental results show that the multi-scale context modeling significantly improves the detection accuracy. In addition, we study the relationship between effective receptive fields (ERFs) and the theoretical receptive fields (TRFs), particularly on a VGGNet. The empirical results further strengthen our conclusion that SSD coupled with context layers achieves better detection results especially for small objects (+3.2%AP@0.5 on MSCOCO compared to the newest SSD), while maintaining comparable runtime performance.", "title": "" }, { "docid": "3de480136e0fd3e122e63870bc49ebdb", "text": "22FDX™ is the industry's first FDSOI technology architected to meet the requirements of emerging mobile, Internet-of-Things (IoT), and RF applications. This platform achieves the power and performance efficiency of a 16/14nm FinFET technology in a cost effective, planar device architecture that can be implemented with ∼30% fewer masks. Performance comes from a second generation FDSOI transistor, which produces nFET (pFET) drive currents of 910μΑ/μm (856μΑ/μm) at 0.8 V and 100nA/μm Ioff. For ultra-low power applications, it offers low-voltage operation down to 0.4V V<inf>min</inf> for 8T logic libraries, as well as 0.62V and 0.52V V<inf>min</inf> for high-density and high-current bitcells, ultra-low leakage devices approaching 1pA/μm I<inf>off</inf>, and body-biasing to actively trade-off power and performance. Superior RF/Analog characteristics to FinFET are achieved including high f<inf>T</inf>/f<inf>MAx</inf> of 375GHz/290GHz and 260GHz/250GHz for nFET and pFET, respectively. The high f<inf>MAx</inf> extends the capabilities to 5G and milli-meter wave (>24GHz) RF applications.", "title": "" } ]
scidocsrr
d0c05a044c6125d249b7c4de875fe40c
Energy efficient IoT-based smart home
[ { "docid": "a8dbb16b9a0de0dcae7780ffe4c0b7cf", "text": "Increased demands on the implementation of wireless sensor networks in automation praxis have resulted in a relatively new wireless standard – ZigBee. A new workplace was established at the Department of Electronics and Multimedia Communications (DEMC) in order to keep up with this modern ZigBee trend. This paper presents the first results and experiences associated with ZigBee-based wireless sensor networking. The emphasis was put on suitable chipset platform selection for Home Automation wireless network purposes. Four popular microcontroller platforms were selected to investigate memory requirements and power consumption: ARM, x51, HCS08, and Coldfire. The next objective was to test interoperability between various manufacturers’ platforms, which is an important feature of the ZigBee standard. A simple network based on the ZigBee physical layer as well as a ZigBee-compliant network were built to confirm basic ZigBee interoperability.", "title": "" }, { "docid": "72ac5e1ec4cfdcd2e7b0591adce56091", "text": "This paper presents a low-cost and flexible home control and monitoring system using an embedded micro web server, with IP connectivity for accessing and controlling devices and appliances remotely using an Android-based smartphone app. Unlike similar systems, the proposed system does not require a dedicated server PC, and it offers a novel communication protocol to monitor and control the home environment with more than just switching functionality. To demonstrate the feasibility and effectiveness of this system, devices such as light switches, a power plug, a temperature sensor, and a current sensor have been integrated with the proposed home control system.", "title": "" } ]
[ { "docid": "7575e468e2ee37c9120efb5e73e4308a", "text": "In this demo, we present Cleanix, a prototype system for cleaning relational Big Data. Cleanix takes data integrated from multiple data sources and cleans them on a shared-nothing machine cluster. The backend system is built on-top-of an extensible and flexible data-parallel substrate - the Hyracks framework. Cleanix supports various data cleaning tasks such as abnormal value detection and correction, incomplete data filling, de-duplication, and conflict resolution. We demonstrate that Cleanix is a practical tool that supports effective and efficient data cleaning at the large scale.", "title": "" }, { "docid": "833c110e040311909aa38b05e457b2af", "text": "The scyphozoan Aurelia aurita (Linnaeus) s. l., is a cosmopolitan species-complex which blooms seasonally in a variety of coastal and shelf sea environments around the world. We hypothesized that ephyrae of Aurelia sp.1 are released from the inner part of the Jiaozhou Bay, China when water temperature is below 15°C in late autumn and winter. The seasonal occurrence, growth, and variation of the scyphomedusa Aurelia sp.1 were investigated in Jiaozhou Bay from January 2011 to December 2011. Ephyrae occurred from May through June with a peak abundance of 2.38 ± 0.56 ind/m3 in May, while the temperature during this period ranged from 12 to 18°C. The distribution of ephyrae was mainly restricted to the coastal area of the bay, and the abundance was higher in the dock of the bay than at the other inner bay stations. Young medusae derived from ephyrae with a median diameter of 9.74 ± 1.7 mm were present from May 22. Growth was rapid from May 22 to July 2 with a maximum daily growth rate of 39%. Median diameter of the medusae was 161.80 ± 18.39 mm at the beginning of July. In August, a high proportion of deteriorated specimens was observed and the median diameter decreased. The highest average abundance is 0.62 ± 1.06 ind/km2 in Jiaozhou Bay in August. The abundance of Aurelia sp.1 medusae was low from September and then decreased to zero. It is concluded that water temperature is the main driver regulating the life cycle of Aurelia sp.1 in Jiaozhou Bay.", "title": "" }, { "docid": "ecd4dd9d8807df6c8194f7b4c7897572", "text": "Nitric oxide (NO) mediates activation of satellite precursor cells to enter the cell cycle. This provides new precursor cells for skeletal muscle growth and muscle repair from injury or disease. Targeting a new drug that specifically delivers NO to muscle has the potential to promote normal function and treat neuromuscular disease, and would also help to avoid side effects of NO from other treatment modalities. In this research, we examined the effectiveness of the NO donor, iosorbide dinitrate (ISDN), and a muscle relaxant, methocarbamol, in promoting satellite cell activation assayed by muscle cell DNA synthesis in normal adult mice. The work led to the development of guaifenesin dinitrate (GDN) as a new NO donor for delivering nitric oxide to muscle. The results revealed that there was a strong increase in muscle satellite cell activation and proliferation, demonstrated by a significant 38% rise in DNA synthesis after a single transdermal treatment with the new compound for 24 h. Western blot and immunohistochemistry analyses showed that the markers of satellite cell myogenesis, expression of myf5, myogenin, and follistatin, were increased after 24 h oral administration of the compound in adult mice. 
This research extends our understanding of the outcomes of NO-based treatments aimed at promoting muscle regeneration in normal tissue. The potential use of such treatment for conditions such as muscle atrophy in disuse and aging, and for the promotion of muscle tissue repair as required after injury or in neuromuscular diseases such as muscular dystrophy, is highlighted.", "title": "" }, { "docid": "339de1d21bfce2e9a8848d6fbc2792d4", "text": "The extraction of local tempo and beat information from audio recordings constitutes a challenging task, particularly for music that reveals significant tempo variations. Furthermore, the existence of various pulse levels such as measure, tactus, and tatum often makes the determination of absolute tempo problematic. In this paper, we present a robust mid-level representation that encodes local tempo information. Similar to the well-known concept of cyclic chroma features, where pitches differing by octaves are identified, we introduce the concept of cyclic tempograms, where tempi differing by a power of two are identified. Furthermore, we describe how to derive cyclic tempograms from music signals using two different methods for periodicity analysis and finally sketch some applications to tempo-based audio segmentation.", "title": "" }, { "docid": "fdbad1d98044bf6494bfd211e6116db8", "text": "This work addresses the problem of underwater archaeological surveys from the point of view of knowledge. We propose an approach based on underwater photogrammetry guided by a representation of knowledge used, as structured by ontologies. Survey data feed into to ontologies and photogrammetry in order to produce graphical results. This paper focuses on the use of ontologies during the exploitation of 3D results. JAVA software dedicated to photogram‐ metry and archaeological survey has been mapped onto an OWL formalism. The use of procedural attachment in a dual representation (JAVA OWL) of the involved concepts allows us to access computational facilities directly from OWL. As SWRL The use of rules illustrates very well such ‘double formalism’ as well as the use of computational capabilities of ‘rules logical expression’. We present an application that is able to read the ontology populated with a photo‐ grammetric survey data. Once the ontology is read, it is possible to produce a 3D representation of the individuals and observing graphically the results of logical spatial queries on the ontology. This work is done on a very important underwater archaeological site in Malta named Xlendi, probably the most ancient shipwreck of the central Mediterranean Sea.", "title": "" }, { "docid": "912c213d76bed8d90f636ea5a6220cf1", "text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information are often low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system. 
Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.", "title": "" }, { "docid": "81ddc594cb4b7f3ed05908ce779aa4f4", "text": "Since the length of microblog texts, such as tweets, is strictly limited to 140 characters, traditional Information Retrieval techniques suffer from the vocabulary mismatch problem severely and cannot yield good performance in the context of microblogosphere. To address this critical challenge, in this paper, we propose a new language modeling approach for microblog retrieval by inferring various types of context information. In particular, we expand the query using knowledge terms derived from Freebase so that the expanded one can better reflect users’ search intent. Besides, in order to further satisfy users’ real-time information need, we incorporate temporal evidences into the expansion method, which can boost recent tweets in the retrieval results with respect to a given topic. Experimental results on two official TREC Twitter corpora demonstrate the significant superiority of our approach over baseline methods.", "title": "" }, { "docid": "8abbd5e2ab4f419a4ca05277a8b1b6a5", "text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. 
Excellent overall error vector magnitude performance has been obtained.", "title": "" }, { "docid": "c2816721fa6ccb0d676f7fdce3b880d4", "text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.", "title": "" }, { "docid": "bcf27c4f750ab74031b8638a9b38fd87", "text": "δ opioid receptor (DOR) was the first opioid receptor of the G protein‑coupled receptor family to be cloned. Our previous studies demonstrated that DOR is involved in regulating the development and progression of human hepatocellular carcinoma (HCC), and is involved in the regulation of the processes of invasion and metastasis of HCC cells. However, whether DOR is involved in the development and progression of drug resistance in HCC has not been reported and requires further elucidation. The aim of the present study was to investigate the expression levels of DOR in the drug‑resistant HCC BEL‑7402/5‑fluorouracil (BEL/FU) cell line, and its effects on drug resistance, in order to preliminarily elucidate the effects of DOR in HCC drug resistance. The results of the present study demonstrated that DOR was expressed at high levels in the BEL/FU cells, and the expression levels were higher, compared with those in normal liver cells. When the expression of DOR was silenced, the proliferation of the drug‑resistant HCC cells were unaffected. However, when the cells were co‑treated with a therapeutic dose of 5‑FU, the proliferation rate of the BEL/FU cells was significantly inhibited, a large number of cells underwent apoptosis, cell cycle progression was arrested and changes in the expression levels of drug‑resistant proteins were observed. Overall, the expression of DOR was upregulated in the drug‑resistant HCC cells, and its functional status was closely associated with drug resistance in HCC. Therefore, DOR may become a recognized target molecule with important roles in the clinical treatment of drug‑resistant HCC.", "title": "" }, { "docid": "f1a36f7fd6b3cf42415c483f6ade768e", "text": "The current paradigm of genomic studies of complex diseases is association and correlation analysis. Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the identified genetic variants by GWAS can only explain a small proportion of the heritability of complex diseases. A large fraction of genetic variants is still hidden. Association analysis has limited power to unravel mechanisms of complex diseases. It is time to shift the paradigm of genomic analysis from association analysis to causal inference. Causal inference is an essential component for the discovery of mechanism of diseases. This paper will review the major platforms of the genomic analysis in the past and discuss the perspectives of causal inference as a general framework of genomic analysis. 
In genomic data analysis, we usually consider four types of associations: association of discrete variables (DNA variation) with continuous variables (phenotypes and gene expressions), association of continuous variables (expressions, methylations, and imaging signals) with continuous variables (gene expressions, imaging signals, phenotypes, and physiological traits), association of discrete variables (DNA variation) with binary trait (disease status) and association of continuous variables (gene expressions, methylations, phenotypes, and imaging signals) with binary trait (disease status). In this paper, we will review algorithmic information theory as a general framework for causal discovery and the recent development of statistical methods for causal inference on discrete data, and discuss the possibility of extending the association analysis of discrete variable with disease to the causal analysis for discrete variable and disease.", "title": "" }, { "docid": "b374975ae9690f96ed750a888713dbc9", "text": "We present a method for densely computing local spherical histograms of oriented gradients (SHOG) in volumetric images. The descriptors are based on the continuous representation of the orientation histograms in the harmonic domain, which we compute very efficiently via spherical tensor products and the fast Fourier transformation. Building upon these local spherical histogram representations, we utilize the Harmonic Filter to create a generic rotation invariant object detection system that benefits from both the highly discriminative representation of local image patches in terms of histograms of oriented gradients and an adaptable trainable voting scheme that forms the filter. We exemplarily demonstrate the effectiveness of such dense spherical 3D descriptors in a detection task on biological 3D images. In a direct comparison to existing approaches, our new filter reveals superior performance.", "title": "" }, { "docid": "5df529aca774edb0eb5ac93c9a0ce3b7", "text": "The GRASP (Graphical Representations of Algorithms, Structures, and Processes) project, which has successfully prototyped a new algorithmic-level graphical representation for software—the control structure diagram (CSD)—is currently focused on the generation of a new fine-grained complexity metric called the complexity profile graph (CPG). The primary impetus for creation and refinement of the CSD and the CPG is to improve the comprehension efficiency of software and, as a result, improve reliability and reduce costs. The current GRASP release provides automatic CSD generation for Ada 95, C, C++, Java, and Very High-Speed Integrated Circuit Hardware Description Language (VHDL) source code, and CPG generation for Ada 95 source code. The examples and discussion in this article are based on using GRASP with Ada 95.", "title": "" }, { "docid": "ef771fa11d9f597f94cee5e64fcf9fd6", "text": "The principle of artificial curiosity directs active exploration towards the most informative or most interesting data. We show its usefulness for global black box optimization when data point evaluations are expensive. Gaussian process regression is used to model the fitness function based on all available observations so far. For each candidate point this model estimates expected fitness reduction, and yields a novel closed-form expression of expected information gain. 
A new type of Pareto-front algorithm continually pushes the boundary of candidates not dominated by any other known data according to both criteria, using multi-objective evolutionary search. This makes the exploration-exploitation trade-off explicit, and permits maximally informed data selection. We illustrate the robustness of our approach in a number of experimental scenarios.", "title": "" }, { "docid": "7df626465d52dfe5859e682c685c62bc", "text": "This thesis addresses the task of error detection in the choice of content words focusing on adjective–noun and verb–object combinations. We show that error detection in content words is an under-explored area in research on learner language since (i) most previous approaches to error detection and correction have focused on other error types, and (ii) the approaches that have previously addressed errors in content words have not performed error detection proper. We show why this task is challenging for the existing algorithms and propose a novel approach to error detection in content words. We note that since content words express meaning, an error detection algorithm should take the semantic properties of the words into account. We use a compositional distribu-tional semantic framework in which we represent content words using their distributions in native English, while the meaning of the combinations is represented using models of com-positional semantics. We present a number of measures that describe different properties of the modelled representations and can reliably distinguish between the representations of the correct and incorrect content word combinations. Finally, we cast the task of error detection as a binary classification problem and implement a machine learning classifier that uses the output of the semantic measures as features. The results of our experiments confirm that an error detection algorithm that uses semantically motivated features achieves good accuracy and precision and outperforms the state-of-the-art approaches. We conclude that the features derived from the semantic representations encode important properties of the combinations that help distinguish the correct combinations from the incorrect ones. The approach presented in this work can naturally be extended to other types of content word combinations. Future research should also investigate how the error correction component for content word combinations could be implemented. 3 4 Acknowledgements First and foremost, I would like to express my profound gratitude to my supervisor, Ted Briscoe, for his constant support and encouragement throughout the course of my research. This work would not have been possible without his invaluable guidance and advice. I am immensely grateful to my examiners, Ann Copestake and Stephen Pulman, for providing their advice and constructive feedback on the final version of the dissertation. I am also thankful to my colleagues at the Natural Language and Information Processing research group for the insightful and inspiring discussions over these years. In particular, I would like to express my gratitude to would like to thank …", "title": "" }, { "docid": "becd45d50ead03dd5af399d5618f1ea3", "text": "This paper presents a new paradigm of cryptography, quantum public-key cryptosystems. In quantum public-key cryptosystems, all parties including senders, receivers and adversaries are modeled as quantum (probabilistic) poly-time Turing (QPT) machines and only classical channels (i.e., no quantum channels) are employed. 
A quantum trapdoor one-way function, f , plays an essential role in our system, in which a QPT machine can compute f with high probability, any QPT machine can invert f with negligible probability, and a QPT machine with trapdoor data can invert f . This paper proposes a concrete scheme for quantum public-key cryptosystems: a quantum public-key encryption scheme or quantum trapdoor one-way function. The security of our schemes is based on the computational assumption (over QPT machines) that a class of subset-sum problems is intractable against any QPT machine. Our scheme is very efficient and practical if Shor’s discrete logarithm algorithm is efficiently realized on a quantum machine.", "title": "" }, { "docid": "b19ba18dbce648ca584d5c41b406d1be", "text": "Communication experiments using normal lab setup, which includes more hardware and less software raises the cost of the total system. The method proposed here provides a new approach through which all the analog and digital experiments can be performed using a single hardware-USRP (Universal Software Radio Peripheral) and software-GNU Radio Companion (GRC). Initially, networking setup is formulated using SDR technology. Later on, one of the analog communication experiments is demonstrated in real time using the GNU Radio Companion, RTL-SDR and USRP. The entire communication system is less expensive as the system uses a single reprogrammable hardware and most of the focus of the system deals with the software part.", "title": "" }, { "docid": "41de353ad7e48d5f354893c6045394e2", "text": "This paper proposes a long short-term memory recurrent neural network (LSTM-RNN) for extracting melody and simultaneously detecting regions of melody from polyphonic audio using the proposed harmonic sum loss. The previous state-of-the-art algorithms have not been based on machine learning techniques and certainly not on deep architectures. The harmonics structure in melody is incorporated in the loss function to attain robustness against both octave mismatch and interference from background music. Experimental results show that the performance of the proposed method is better than or comparable to other state-of-the-art algorithms.", "title": "" }, { "docid": "4b886b3ee8774a1e3110c12bdbdcbcdf", "text": "To engage in cooperative activities with human partners, robots have to possess basic interactive abilities and skills. However, programming such interactive skills is a challenging task, as each interaction partner can have different timing or an alternative way of executing movements. In this paper, we propose to learn interaction skills by observing how two humans engage in a similar task. To this end, we introduce a new representation called Interaction Primitives. Interaction primitives build on the framework of dynamic motor primitives (DMPs) by maintaining a distribution over the parameters of the DMP. With this distribution, we can learn the inherent correlations of cooperative activities which allow us to infer the behavior of the partner and to participate in the cooperation. We will provide algorithms for synchronizing and adapting the behavior of humans and robots during joint physical activities.", "title": "" } ]
scidocsrr
452a8dd80aff6209e6f6b9783a8a8340
PReMVOS: Proposal-generation, Refinement and Merging for the DAVIS Challenge on Video Object Segmentation 2018
[ { "docid": "254b82dc2ee6f0d753803c4a90dcd8b7", "text": "Most previous bounding-box-based segmentation methods assume the bounding box tightly covers the object of interest. However it is common that a rectangle input could be too large or too small. In this paper, we propose a novel segmentation approach that uses a rectangle as a soft constraint by transforming it into an Euclidean distance map. A convolutional encoder-decoder network is trained end-to-end by concatenating images with these distance maps as inputs and predicting the object masks as outputs. Our approach gets accurate segmentation results given sloppy rectangles while being general for both interactive segmentation and instance segmentation. We show our network extends to curve-based input without retraining. We further apply our network to instance-level semantic segmentation and resolve any overlap using a conditional random field. Experiments on benchmark datasets demonstrate the effectiveness of the proposed approaches.", "title": "" }, { "docid": "bb404c0e94cde80436d2c5bd331c7816", "text": "Conventional video segmentation methods often rely on temporal continuity to propagate masks. Such an assumption suffers from issues like drifting and inability to handle large displacement. To overcome these issues, we formulate an effective mechanism to prevent the target from being lost via adaptive object re-identification. Specifically, our Video Object Segmentation with Re-identification (VSReID) model includes a mask propagation module and a ReID module. The former module produces an initial probability map by flow warping while the latter module retrieves missing instances by adaptive matching. With these two modules iteratively applied, our VS-ReID records a global mean (Region Jaccard and Boundary F measure) of 0.699, the best performance in 2017 DAVIS Challenge.", "title": "" }, { "docid": "33de1981b2d9a0aa1955602006d09db9", "text": "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.", "title": "" } ]
[ { "docid": "597e00855111c6ccb891c96e28f23585", "text": "Global food demand is increasing rapidly, as are the environmental impacts of agricultural expansion. Here, we project global demand for crop production in 2050 and evaluate the environmental impacts of alternative ways that this demand might be met. We find that per capita demand for crops, when measured as caloric or protein content of all crops combined, has been a similarly increasing function of per capita real income since 1960. This relationship forecasts a 100-110% increase in global crop demand from 2005 to 2050. Quantitative assessments show that the environmental impacts of meeting this demand depend on how global agriculture expands. If current trends of greater agricultural intensification in richer nations and greater land clearing (extensification) in poorer nations were to continue, ~1 billion ha of land would be cleared globally by 2050, with CO(2)-C equivalent greenhouse gas emissions reaching ~3 Gt y(-1) and N use ~250 Mt y(-1) by then. In contrast, if 2050 crop demand was met by moderate intensification focused on existing croplands of underyielding nations, adaptation and transfer of high-yielding technologies to these croplands, and global technological improvements, our analyses forecast land clearing of only ~0.2 billion ha, greenhouse gas emissions of ~1 Gt y(-1), and global N use of ~225 Mt y(-1). Efficient management practices could substantially lower nitrogen use. Attainment of high yields on existing croplands of underyielding nations is of great importance if global crop demand is to be met with minimal environmental impacts.", "title": "" }, { "docid": "a733ec1769f40b0d7580409ef2705682", "text": "BACKGROUND\nBiomedical data, e.g. from knowledge bases and ontologies, is increasingly made available following open linked data principles, at best as RDF triple data. This is a necessary step towards unified access to biological data sets, but this still requires solutions to query multiple endpoints for their heterogeneous data to eventually retrieve all the meaningful information. Suggested solutions are based on query federation approaches, which require the submission of SPARQL queries to endpoints. Due to the size and complexity of available data, these solutions have to be optimised for efficient retrieval times and for users in life sciences research. Last but not least, over time, the reliability of data resources in terms of access and quality have to be monitored. Our solution (BioFed) federates data over 130 SPARQL endpoints in life sciences and tailors query submission according to the provenance information. BioFed has been evaluated against the state of the art solution FedX and forms an important benchmark for the life science domain.\n\n\nMETHODS\nThe efficient cataloguing approach of the federated query processing system 'BioFed', the triple pattern wise source selection and the semantic source normalisation forms the core to our solution. It gathers and integrates data from newly identified public endpoints for federated access. Basic provenance information is linked to the retrieved data. Last but not least, BioFed makes use of the latest SPARQL standard (i.e., 1.1) to leverage the full benefits for query federation. 
The evaluation is based on 10 simple and 10 complex queries, which address data in 10 major and very popular data sources (e.g., Dugbank, Sider).\n\n\nRESULTS\nBioFed is a solution for a single-point-of-access for a large number of SPARQL endpoints providing life science data. It facilitates efficient query generation for data access and provides basic provenance information in combination with the retrieved data. BioFed fully supports SPARQL 1.1 and gives access to the endpoint's availability based on the EndpointData graph. Our evaluation of BioFed against FedX is based on 20 heterogeneous federated SPARQL queries and shows competitive execution performance in comparison to FedX, which can be attributed to the provision of provenance information for the source selection.\n\n\nCONCLUSION\nDeveloping and testing federated query engines for life sciences data is still a challenging task. According to our findings, it is advantageous to optimise the source selection. The cataloguing of SPARQL endpoints, including type and property indexing, leads to efficient querying of data resources over the Web of Data. This could even be further improved through the use of ontologies, e.g., for abstract normalisation of query terms.", "title": "" }, { "docid": "e34ad4339934d9b9b4019fad37f8dd4e", "text": "This paper presents a technique for estimating the threedimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: Firstly, we are independent in choosing a disparity estimation technique, which can yield either sparse or dense correspondences, and secondly, we can achieve frame rates of 5 fps on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters.", "title": "" }, { "docid": "c112d026a15e2ace201b12fa8ac98fe6", "text": "Disturbance of mineral status in patients with chronic renal failure (CRF) is one of many complications of this disease. Trace elements analysis in hair is sometime used by clinicians for a diagnosis of mineral status. In the present study concentration of magnesium and other trace elements was determined in serum, erythrocytes, and hair of patients with CRF undergoing hemodialysis (n = 31) and with impaired renal function but non-dialyzed (n = 15). Measurements of mineral content were performed by the atomic absorption spectrometry method (AAS). In serum of hemodialyzed patients as well as in erythrocytes and hair we found significantly increased levels of almost all tested elements, especially for Mg, Al, and Cr, compared to the control group. No significant differences were observed between these groups only in the Cd content in the examined samples. However, a significant correlation between its concentration in serum and erythrocytes was only found in the case of this element. Hair analysis reflected well the changes of mineral distribution in patients with CRF and may be used to diagnose these anomalies, in particular, with regard to Ca, Mg, Fe, and Cr. However, a strong variability of the concentration for these elements was found. 
In conclusion, our results confirm that renal failure as well as dialysis provoke imbalances of elemental status in physiological fluids and tissues, which should be monitored.", "title": "" }, { "docid": "40e1ead45e4b5328c76ec991a1e8a81b", "text": "This paper presents a game-theoretic and learning approach to security risk management based on a model that captures the diffusion of risk in an organization with multiple technical and business processes. Of particular interest is the way the interdependencies between processes affect the evolution of the organization's risk profile as time progresses, which is first developed as a probabilistic risk framework and then studied within a discrete Markov model. Using zero-sum dynamic Markov games, we analyze the interaction between a malicious adversary whose actions increases the risk level of the organization and a defender agent, e.g. security and risk management division of the organization, which aims to mitigate risks. We derive min-max (saddle point) solutions of this game to obtain the optimal risk management strategies for the organization to achieve a certain level of performance. This methodology also applies to worst-case scenario analysis where the adversary can be interpreted as a nature player in the game. In practice, the parameters of the Markov game may not be known due to the costly nature of collecting and processing information about the adversary as well an organization with many components itself. We apply ideas from Q-learning to analyze the behavior of the agents when little information is known about the environment in which the attacker and defender interact. The framework developed and results obtained are illustrated with a small example scenario and numerical analysis.", "title": "" }, { "docid": "6875d41e412d71f45d6d4ea43697ed80", "text": "Context Emergency department visits by older adults are often due to adverse drug events, but the proportion of these visits that are the result of drugs designated as inappropriate for use in this population is unknown. Contribution Analyses of a national surveillance study of adverse drug events and a national outpatient survey estimate that Americans age 65 years or older have more than 175000 emergency department visits for adverse drug events yearly. Three commonly prescribed drugs accounted for more than one third of visits: warfarin, insulin, and digoxin. Caution The study was limited to adverse events in the emergency department. Implication Strategies to decrease adverse drug events among older adults should focus on warfarin, insulin, and digoxin. The Editors Adverse drug events cause clinically significant morbidity and mortality and are associated with large economic costs (15). They are common in older adults, regardless of whether they live in the community, reside in long-term care facilities, or are hospitalized (59). Most physicians recognize that prescribing medications to older patients requires special considerations, but nongeriatricians are typically unfamiliar with the most commonly used measure of medication appropriateness for older patients: the Beers criteria (1012). The Beers criteria are a consensus-based list of medications identified as potentially inappropriate for use in older adults. The criteria were introduced in 1991 to help researchers evaluate prescription quality in nursing homes (10). 
The Beers criteria were updated in 1997 and 2003 to apply to all persons age 65 years or older, to include new medications judged to be ineffective or to pose unnecessarily high risk, and to rate the severity of adverse outcomes (11, 12). Prescription rates of Beers criteria medications have become a widely used measure of quality of care for older adults in research studies in the United States and elsewhere (1326). The application of the Beers criteria as a measure of health care quality and safety has expanded beyond research studies. The Centers for Medicare & Medicaid Services incorporated the Beers criteria into federal safety regulations for long-term care facilities in 1999 (27). The prescription rate of potentially inappropriate medications is one of the few medication safety measures in the National Healthcare Quality Report (28) and has been introduced as a Health Plan and Employer Data and Information Set quality measure for managed care plans (29). Despite widespread adoption of the Beers criteria to measure prescription quality and safety, as well as proposals to apply these measures to additional settings, such as medication therapy management services under Medicare Part D (30), population-based data on the effect of adverse events from potentially inappropriate medications are sparse and do not compare the risks for adverse events from Beers criteria medications against those from other medications (31, 32). Adverse drug events that lead to emergency department visits are clinically significant adverse events (5) and result in increased health care resource utilization and expense (6). We used nationally representative public health surveillance data to estimate the number of emergency department visits for adverse drug events involving Beers criteria medications and compared the number with that for adverse drug events involving other medications. We also estimated the frequency of outpatient prescription of Beers criteria medications and other medications to calculate and compare the risks for emergency department visits for adverse drug events per outpatient prescription visit. Methods Data Sources National estimates of emergency department visits for adverse drug events were based on data from the 58 nonpediatric hospitals participating in the National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance (NEISS-CADES) System, a nationally representative, size-stratified probability sample of hospitals (excluding psychiatric and penal institutions) in the United States and its territories with a minimum of 6 beds and a 24-hour emergency department (Figure 1) (3335). As described elsewhere (5, 34), trained coders at each hospital reviewed clinical records of every emergency department visit to report physician-diagnosed adverse drug events. Coders reported clinical diagnosis, medication implicated in the adverse event, and narrative descriptions of preceding circumstances. Data collection, management, quality assurance, and analyses were determined to be public health surveillance activities by the Centers for Disease Control and Prevention (CDC) and U.S. Food and Drug Administration human subjects oversight bodies and, therefore, did not require human subject review or institutional review board approval. Figure 1. Data sources and descriptions. 
NAMCS= National Ambulatory Medical Care Survey (36); NEISS-CADES= National Electronic Injury Surveillance SystemCooperative Adverse Drug Event Surveillance System (5, 3335); NHAMCS = National Hospital Ambulatory Medical Care Survey (37). *The NEISS-CADES is a 63-hospital national probability sample, but 5 pediatric hospitals were not included in this analysis. National estimates of outpatient prescription were based on 2 cross-sectional surveys, the National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS), designed to provide information on outpatient office visits and visits to hospital outpatient clinics and emergency departments (Figure 1) (36, 37). These surveys have been previously used to document the prescription rates of inappropriate medications (17, 3840). Definition of Potentially Inappropriate Medications The most recent iteration of the Beers criteria (12) categorizes 41 medications or medication classes as potentially inappropriate under any circumstances (always potentially inappropriate) and 7 medications or medication classes as potentially inappropriate when used in certain doses, frequencies, or durations (potentially inappropriate in certain circumstances). For example, ferrous sulfate is considered to be potentially inappropriate only when used at dosages greater than 325 mg/d, but not potentially inappropriate if used at lower dosages. For this investigation, we included the Beers criteria medications listed in Table 1. Because medication dose, duration, and frequency were not always available in NEISS-CADES and are not reported in NAMCS and NHAMCS, we included medications regardless of dose, duration, or frequency of use. We excluded 3 medications considered to be potentially inappropriate when used in specific formulations (short-acting nifedipine, short-acting oxybutynin, and desiccated thyroid) because NEISS-CADES, NAMCS, and NHAMCS do not reliably identify these formulations. Table 1. Potentially Inappropriate Medications for Individuals Age 65 Years or Older The updated Beers criteria identify additional medications as potentially inappropriate if they are prescribed to patients who have certain preexisting conditions. We did not include these medications because they have rarely been used in previous studies or safety measures and NEISS-CADES, NAMCS, and NHAMCS do not reliably identify preexisting conditions. Identification of Emergency Department Visits for Adverse Drug Events We defined an adverse drug event case as an incident emergency department visit by a patient age 65 years or older, from 1 January 2004 to 31 December 2005, for a condition that the treating physician explicitly attributed to the use of a drug or for a drug-specific effect (5). Adverse events include allergic reactions (immunologically mediated effects) (41), adverse effects (undesirable pharmacologic or idiosyncratic effects at recommended doses) (41), unintentional overdoses (toxic effects linked to excess dose or impaired excretion) (41), or secondary effects (such as falls and choking). We excluded cases of intentional self-harm, therapeutic failures, therapy withdrawal, drug abuse, adverse drug events that occurred as a result of medical treatment received during the emergency department visit, and follow-up visits for a previously diagnosed adverse drug event. We defined an adverse drug event from Beers criteria medications as an emergency department visit in which a medication from Table 1 was implicated. 
Identification of Outpatient Prescription Visits We used the NAMCS and NHAMCS public use data files for the most recent year available (2004) to identify outpatient prescription visits. We defined an outpatient prescription visit as any outpatient office, hospital clinic, or emergency department visit at which treatment with a medication of interest was either started or continued. We identified medications by generic name for those with a single active ingredient and by individual active ingredients for combination products. We categorized visits with at least 1 medication identified in Table 1 as involving Beers criteria medications. Statistical Analysis Each NEISS-CADES, NAMCS, and NHAMCS case is assigned a sample weight on the basis of the inverse probability of selection (33, 4244). We calculated national estimates of emergency department visits and prescription visits by summing the corresponding sample weights, and we calculated 95% CIs by using the SURVEYMEANS procedure in SAS, version 9.1 (SAS Institute, Cary, North Carolina), to account for the sampling strata and clustering by site. To obtain annual estimates of visits for adverse events, we divided NEISS-CADES estimates for 20042005 and corresponding 95% CI end points by 2. Estimates based on small numbers of cases (<20 cases for NEISS-CADES and <30 cases for NAMCS and NHAMCS) or with a coefficient of variation greater than 30% are considered statistically unstable and are identified in the tables. To estimate the risk for adverse events relative to outpatient prescription", "title": "" }, { "docid": "215dc8ac0f9e30ff4bb7da1cc1996a21", "text": "Social neuroscience benefits from the experimental manipulation of neuronal activity. One possible manipulation, neurofeedback, is an operant conditioning-based technique in which individuals sense, interact with, and manage their own physiological and mental states. Neurofeedback has been applied to a wide variety of psychiatric illnesses, as well as to treat sub-clinical symptoms, and even to enhance performance in healthy populations. Despite growing interest, there persists a level of distrust and/or bias in the medical and research communities in the USA toward neurofeedback and other functional interventions. As a result, neurofeedback has been largely ignored, or disregarded within social neuroscience. We propose a systematic, empirically-based approach for assessing the effectiveness, and utility of neurofeedback. To that end, we use the term perturbative physiologic plasticity to suggest that biological systems function as an integrated whole that can be perturbed and guided, either directly or indirectly, into different physiological states. When the intention is to normalize the system, e.g., via neurofeedback, we describe it as self-directed neuroplasticity, whose outcome is persistent functional, structural, and behavioral changes. We argue that changes in physiological, neuropsychological, behavioral, interpersonal, and societal functioning following neurofeedback can serve as objective indices and as the metrics necessary for assessing levels of efficacy. In this chapter, we examine the effects of neurofeedback on functional connectivity in a few clinical disorders as case studies for this approach. 
We believe this broader perspective will open new avenues of investigation, especially within social neuroscience, to further elucidate the mechanisms and effectiveness of these types of interventions, and their relevance to basic research.", "title": "" }, { "docid": "57cb465ba54502fd5685f37b37812d71", "text": "Solving logistic regression with L1-regularization in distributed settings is an important problem. This problem arises when training dataset is very large and cannot fit the memory of a single machine. We present d-GLMNET, a new algorithm solving logistic regression with L1-regularization in the distributed settings. We empirically show that it is superior over distributed online learning via truncated gradient.", "title": "" }, { "docid": "4191648ada97ecc5a906468369c12bf4", "text": "Dermoscopy is a widely used technique whose role in the clinical (and preoperative) diagnosis of melanocytic and non-melanocytic skin lesions has been well established in recent years. The aim of this paper is to clarify the correlations between the \"local\" dermoscopic findings in melanoma and the underlying histology, in order to help clinicians in routine practice.", "title": "" }, { "docid": "6291f21727c70d3455a892a8edd3b18c", "text": "Given a single column of values, existing approaches typically employ regex-like rules to detect errors by finding anomalous values inconsistent with others. Such techniques make local decisions based only on values in the given input column, without considering a more global notion of compatibility that can be inferred from large corpora of clean tables. We propose \\sj, a statistics-based technique that leverages co-occurrence statistics from large corpora for error detection, which is a significant departure from existing rule-based methods. Our approach can automatically detect incompatible values, by leveraging an ensemble of judiciously selected generalization languages, each of which uses different generalizations and is sensitive to different types of errors. Errors so detected are based on global statistics, which is robust and aligns well with human intuition of errors. We test \\sj on a large set of public Wikipedia tables, as well as proprietary enterprise Excel files. While both of these test sets are supposed to be of high-quality, \\sj makes surprising discoveries of over tens of thousands of errors in both cases, which are manually verified to be of high precision (over 0.98). Our labeled benchmark set on Wikipedia tables is released for future research.", "title": "" }, { "docid": "077287f3cdf841d7998c35ec13568645", "text": "We present an approach for blind image deblurring, which handles non-uniform blurs. Our algorithm has two main components: (i) A new method for recovering the unknown blur-field directly from the blurry image, and (ii) A method for deblurring the image given the recovered non-uniform blur-field. Our blur-field estimation is based on analyzing the spectral content of blurry image patches by Re-blurring them. Being unrestricted by any training data, it can handle a large variety of blur sizes, yielding superior blur-field estimation results compared to training-based deep-learning methods. Our non-uniform deblurring algorithm is based on the internal image-specific patch-recurrence prior. It attempts to recover a sharp image which, on one hand – results in the blurry image under our estimated blur-field, and on the other hand – maximizes the internal recurrence of patches within and across scales of the recovered sharp image. 
The combination of these two components gives rise to a blind-deblurring algorithm, which exceeds the performance of state-of-the-art CNN-based blind-deblurring by a significant margin, without the need for any training data.", "title": "" }, { "docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d", "text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.", "title": "" }, { "docid": "df00815ab7f96a286ca336ecd85ed821", "text": "In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct a MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K +K log n) for standard K-sparse data with length n. However, few of existing algorithms have utilized this for CS-MRI, while most of them model the problem with total variation and wavelet sparse regularization. On the other side, some algorithms have been proposed for tree sparse regularization, but few of them have validated the benefit of wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems then each of the subproblems can be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and gain better reconstructions results on real MR images than general tree based solvers or algorithms.", "title": "" }, { "docid": "fb6fabe03dd309e07e20d9b235384dc8", "text": "Unmanned Aircraft Systems (UAS) is an emerging technology with a tremendous potential to revolutionize warfare and to enable new civilian applications. It is integral part of future urban civil and military applications. It technologically matures enough to be integrated into civil society. The importance of UAS in scientific applications has been thoroughly demonstrated in recent years (DoD, 2010). Whatever missions are chosen for the UAS, their number and use will significantly increase in the future. UAS today play an increasing role in many public missions such as border surveillance, wildlife surveys, military training, weather monitoring, and local law enforcement. 
Challenges such as the lack of an on-board pilot to see and avoid other aircraft and the wide variation in unmanned aircraft missions and capabilities must be addressed in order to fully integrate UAS operations in the NAS in the Next Gen time frame. UAVs are better suited for dull, dirty, or dangerous missions than manned aircraft. UAS are mainly used for intelligence, surveillance and reconnaissance (ISR), border security, counter insurgency, attack and strike, target identification and designation, communications relay, electronic attack, law enforcement and security applications, environmental monitoring and agriculture, remote sensing, aerial mapping and meteorology. Although armed forces around the world continue to strongly invest in researching and developing technologies with the potential to advance the capabilities of UAS.", "title": "" }, { "docid": "36d0776ad44592db640bd205acee8e39", "text": "1. A review of the literature shows that in nearly all cases tropical rain forest fragmentation has led to a local loss of species. Isolated fragments suffer eductions in species richness with time after excision from continuous forest, and small fragments often have fewer species recorded for the same effort of observation than large fragments orareas of continuous forest. 2. Birds have been the most frequently studied taxonomic group with respect o the effects of tropical forest fragmentation. 3. The mechanisms of fragmentation-related extinction i clude the deleterious effects of human disturbance during and after deforestation, the reduction of population sizes, the reduction of immigration rates, forest edge effects, changes in community structure (secondand higher-order effects) and the immigration fexotic species. 4. The relative importance of these mechanisms remains obscure. 5. Animals that are large, sparsely or patchily distributed, orvery specialized and intolerant of the vegetation surrounding fragments, are particularly prone to local extinction. 6. The large number of indigenous pecies that are very sparsely distributed and intolerant of conditions outside the forest make evergreen tropical rain forest particularly susceptible to species loss through fragmentation. 7. Much more research is needed to study what is probably the major threat o global biodiversity.", "title": "" }, { "docid": "4591003089a1ccecd46fb1ac80ab3bb7", "text": "Pre-season rugby training develops the physical requisites for competition and consists of a high volume of resistance training and anaerobic and aerobic conditioning. However, the effects of a rugby union pre-season in professional athletes are currently unknown. Therefore, the purpose of this investigation was to determine the effects of a 4-week pre-season on 33 professional rugby union players. Bench press and box squat increased moderately (13.6 kg, 90% confidence limits +/-2.9 kg and 17.6 +/- 8.0 kg, respectively) over the training phase. Small decreases in bench throw (70.6 +/- 53.5 W), jump squat (280.1 +/- 232.4 W), and fat mass (1.4 +/- 0.4 kg) were observed. In addition, small increases were seen in fat-free mass (2.0 +/- 0.6 kg) and flexed upper-arm girth (0.6 +/- 0.2 cm), while moderate increases were observed in mid-thigh girth (1.9 +/- 0.5 cm) and perception of fatigue (0.6 +/- 0.4 units). Increases in strength and body composition were observed in elite rugby union players after 4 weeks of intensive pre-season training, but this may have been the result of a return to fitness levels prior to the off-season. 
Decreases in power may reflect high training volumes and increases in perceived of fatigue.", "title": "" }, { "docid": "dd38d76f208d26e681c00f63b50492e5", "text": "An anti-louse shampoo (Licener®) based on a neem seed extract was tested in vivo and in vitro on its efficacy to eliminate head louse infestation by a single treatment. The hair of 12 children being selected from a larger group due to their intense infestation with head lice were incubated for 10 min with the neem seed extract-containing shampoo. It was found that after this short exposition period, none of the lice had survived, when being observed for 22 h. In all cases, more than 50–70 dead lice had been combed down from each head after the shampoo had been washed out with normal tap water. A second group of eight children had been treated for 20 min with identical results. Intense combing of the volunteers 7 days after the treatment did not result in the finding of any motile louse neither in the 10-min treated group nor in the group the hair of which had been treated for 20 min. Other living head lice were in vitro incubated within the undiluted product (being placed inside little baskets the floor of which consisted of a fine net of gauze). It was seen that a total submersion for only 3 min prior to washing 3× for 2 min with tap water was sufficient to kill all motile stages (larvae and adults). The incubation of nits at 30°C into the undiluted product for 3, 10, and 20 min did not show differences. In all cases, there was no eyespot development or hatching larvae within 7–10 days of observation. This and the fact that the hair of treated children (even in the short-time treated group of only 10 min) did not reveal freshly hatched larval stages of lice indicate that there is an ovicidal activity of the product, too.", "title": "" }, { "docid": "adeb7bdbe9e903ae7041f93682b0a27c", "text": "Self -- Management systems are the main objective of Autonomic Computing (AC), and it is needed to increase the running system's reliability, stability, and performance. This field needs to investigate some issues related to complex systems such as, self-awareness system, when and where an error state occurs, knowledge for system stabilization, analyze the problem, healing plan with different solutions for adaptation without the need for human intervention. This paper focuses on self-healing which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyze, and repair existing faults within the system. All of these phases are accomplished in real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, self-healing system should have the ability to modify its own behavior in response to changes within the environment. Recursive neural network has been proposed and used to solve the main challenges of self-healing, such as monitoring, interpretation, resolution, and adaptation.", "title": "" }, { "docid": "b0c694eb683c9afb41242298fdd4cf63", "text": "We have demonstrated 8.5-11.5 GHz class-E MMIC high-power amplifiers (HPAs) with a peak power-added-efficiency (PAE) of 61% and drain efficiency (DE) of 70% with an output power of 3.7 W in a continuous-mode operation. At 5 W output power, PAE and DE of 58% and 67% are measured, respectively, which implies MMIC power density of 5 W/mm at Vds = 30 V. The peak gain is 11 dB, with an associated gain of 9 dB at the peak PAE. 
At an output power of 9 W, DE and PAE of 59% and 51% were measured, respectively. In order to improve the linearity, we have designed and simulated X-band class-E MMIC PAs similar to a Doherty configuration. The Doherty-based class-E amplifiers show an excellent cancellation of a third-order intermodulation product (IM3), which improved the simulated two-tone linearity C/IM3 to > 50 dBc.", "title": "" }, { "docid": "032f444d4844c4fa9a3e948cbbc0818a", "text": "This paper presents a microstrip dual-band bandpass filter (BPF) based on cross-shaped resonator and spurline. It is shown that spurlines added into input/output ports of a cross-shaped resonator generate an additional notch band. Using even and odd-mode analysis the proposed structure is realized and designed. The proposed bandpass filter has dual passband from 1.9 GHz to 2.4 GHz and 9.5 GHz to 11.5 GHz.", "title": "" } ]
scidocsrr
a7bbea069feaed269fc9caf24cc3c6a0
Architectural support for SWAR text processing with parallel bit streams: the inductive doubling principle
[ { "docid": "8fde46517d705da12fb43ce110a27a0f", "text": "Parabix (parallel bit streams for XML) is an open-source XML parser that employs the SIMD (single-instruction multiple-data) capabilities of modern-day commodity processors to deliver dramatic performance improvements over traditional byte-at-a-time parsing technology. Byte-oriented character data is first transformed to a set of 8 parallel bit streams, each stream comprising one bit per character code unit. Character validation, transcoding and lexical item stream formation are all then carried out in parallel using bitwise logic and shifting operations. Byte-at-a-time scanning loops in the parser are replaced by bit scan loops that can advance by as many as 64 positions with a single instruction.\n A performance study comparing Parabix with the open-source Expat and Xerces parsers is carried out using the PAPI toolkit. Total CPU cycle counts, level 2 data cache misses and branch mispredictions are measured and compared for each parser. The performance of Parabix is further studied with a breakdown of the cycle counts across the core components of the parser. Prospects for further performance improvements are also outlined, with a particular emphasis on leveraging the intraregister parallelism of SIMD processing to enable intrachip parallelism on multicore architectures.", "title": "" } ]
[ { "docid": "bf1ba6901d6c64a341ba1491c6c2c3c9", "text": "The present research proposes schema congruity as a theoretical basis for examining the effectiveness and consequences of product anthropomorphism. Results of two studies suggest that the ability of consumers to anthropomorphize a product and their consequent evaluation of that product depend on the extent to which that product is endowed with characteristics congruent with the proposed human schema. Furthermore, consumers’ perception of the product as human mediates the influence of feature type on product evaluation. Results of a third study, however, show that the affective tag attached to the specific human schema moderates the evaluation but not the successful anthropomorphizing of theproduct.", "title": "" }, { "docid": "7b99f2b0c903797c5ed33496f69481fc", "text": "Dance imagery is a consciously created mental representation of an experience, either real or imaginary, that may affect the dancer and her or his movement. In this study, imagery research in dance was reviewed in order to: 1. describe the themes and ideas that the current literature has attempted to illuminate and 2. discover the extent to which this literature fits the Revised Applied Model of Deliberate Imagery Use. A systematic search was performed, and 43 articles from 24 journals were found to fit the inclusion criteria. The articles were reviewed, analyzed, and categorized. The findings from the articles were then reported using the Revised Applied Model as a framework. Detailed descriptions of Who, What, When and Where, Why, How, and Imagery Ability were provided, along with comparisons to the field of sports imagery. Limitations within the field, such as the use of non-dance-specific and study-specific measurements, make comparisons and clear conclusions difficult to formulate. Future research can address these problems through the creation of dance-specific measurements, higher participant rates, and consistent methodologies between studies.", "title": "" }, { "docid": "7f3686b783273c4df7c4fb41fe7ccefd", "text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "345e46da9fc01a100f10165e82d9ca65", "text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. 
The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.", "title": "" }, { "docid": "a4f0b524f79db389c72abd27d36f8944", "text": "In order to summarize the status of rescue robotics, this chapter will cover the basic characteristics of disasters and their impact on robotic design, describe the robots actually used in disasters to date, promising robot designs (e.g., snakes, legged locomotion) and concepts (e.g., robot teams or swarms, sensor networks), methods of evaluation in benchmarks for rescue robotics, and conclude with a discussion of the fundamental problems and open issues facing rescue robotics, and their evolution from an interesting idea to widespread adoption. The Chapter will concentrate on the rescue phase, not recovery, with the understanding that capabilities for rescue can be applied to, and extended for, the recovery phase. The use of robots in the prevention and preparedness phases of disaster management are outside the scope of this chapter.", "title": "" }, { "docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04", "text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.", "title": "" }, { "docid": "518dc6882c6e13352c7b41f23dfd2fad", "text": "The Diagnostic and Statistical Manual of Mental Disorders (DSM) is considered to be the gold standard manual for assessing the psychiatric diseases and is currently in its fourth version (DSM-IV), while a fifth (DSM-V) has just been released in May 2013. The DSM-V Anxiety Work Group has put forward recommendations to modify the criteria for diagnosing specific phobias. In this manuscript, we propose to consider the inclusion of nomophobia in the DSM-V, and we make a comprehensive overview of the existing literature, discussing the clinical relevance of this pathology, its epidemiological features, the available psychometric scales, and the proposed treatment. Even though nomophobia has not been included in the DSM-V, much more attention is paid to the psychopathological effects of the new media, and the interest in this topic will increase in the near future, together with the attention and caution not to hypercodify as pathological normal behaviors.", "title": "" }, { "docid": "917ab22adee174259bef5171fe6f14fb", "text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. 
In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.", "title": "" }, { "docid": "43f5d21de3421564a7d5ecd6c074ea0a", "text": "Epithelial-mesenchymal transition (EMT) is an important process in embryonic development, fibrosis, and cancer metastasis. During cancer progression, the activation of EMT permits cancer cells to acquire migratory, invasive, and stem-like properties. A growing body of evidence supports the critical link between EMT and cancer stemness. However, contradictory results have indicated that the inhibition of EMT also promotes cancer stemness, and that mesenchymal-epithelial transition, the reverse process of EMT, is associated with the tumor-initiating ability required for metastatic colonization. The concept of 'intermediate-state EMT' provides a possible explanation for this conflicting evidence. In addition, recent studies have indicated that the appearance of 'hybrid' epithelial-mesenchymal cells is favorable for the establishment of metastasis. In summary, dynamic changes or plasticity between the epithelial and the mesenchymal states rather than a fixed phenotype is more likely to occur in tumors in the clinical setting. Further studies aimed at validating and consolidating the concept of intermediate-state EMT and hybrid tumors are needed for the establishment of a comprehensive profile of cancer metastasis.", "title": "" }, { "docid": "a1f29ac1db0745a61baf6995459c02e7", "text": "Adolescence is a developmental period characterized by suboptimal decisions and actions that give rise to an increased incidence of unintentional injuries and violence, alcohol and drug abuse, unintended pregnancy and sexually transmitted diseases. Traditional neurobiological and cognitive explanations for adolescent behavior have failed to account for the nonlinear changes in behavior observed during adolescence, relative to childhood and adulthood. This review provides a biologically plausible conceptualization of the neural mechanisms underlying these nonlinear changes in behavior, as a heightened responsiveness to incentives while impulse control is still relatively immature during this period. Recent human imaging and animal studies provide a biological basis for this view, suggesting differential development of limbic reward systems relative to top-down control systems during adolescence relative to childhood and adulthood. This developmental pattern may be exacerbated in those adolescents with a predisposition toward risk-taking, increasing the risk for poor outcomes.", "title": "" }, { "docid": "f437862098dac160f3a3578baeb565a2", "text": "Techniques for modeling and simulating channel conditions play an essential role in understanding network protocol and application behavior. 
In [11], we demonstrated that inaccurate modeling using a traditional analytical model yielded significant errors in error control protocol parameters choices. In this paper, we demonstrate that time-varying effects on wireless channels result in wireless traces which exhibit non-stationary behavior over small window sizes. We then present an algorithm that divides traces into stationary components in order to provide analytical channel models that, relative to traditional approaches, more accurately represent characteristics such as burstiness, statistical distribution of errors, and packet loss processes. Our algorithm also generates artificial traces with the same statistical characteristics as actual collected network traces. For validation, we develop a channel model for the circuit-switched data service in GSM and show that it: (1) more closely approximates GSM channel characteristics than a traditional Gilbert model and (2) generates artificial traces that closely match collected traces' statistics. Using these traces in a simulator environment enables future protocol and application testing under different controlled and repeatable conditions.", "title": "" }, { "docid": "c3c36535a6dbe74165c0e8b798ac820f", "text": "Multiplier, being a very vital part in the design of microprocessor, graphical systems, multimedia systems, DSP system etc. It is very important to have an efficient design in terms of performance, area, speed of the multiplier, and for the same Booth's multiplication algorithm provides a very fundamental platform for all the new advances made for high end multipliers meant for faster multiplication with higher performance. The algorithm provides an efficient encoding of the bits during the first steps of the multiplication process. In pursuit of the same, Radix 4 booths encoding has increased the performance of the multiplier by reducing the number of partial products generated. Radix 4 Booths algorithm produces both positive and negative partial products and implementing the negative partial product nullifies the advances made in different units to some extent if not fully. Most of the research work focuses on the reduction of the number of partial products generated and making efficient implementation of the algorithm. There is very little work done on disposal of the negative partial products generated. The presented work in the paper addresses the issue of disposal of the negative partial products efficiently by computing the 2's complement avoiding the additional adder for adding 1 and generation of long carry chain, hence. The proposed mechanism also continues to support the concept of reducing the partial product and in persuasion of the same it is able to reduce the number of partial product and also improved further from n/2 +1 partial products achieved via modified booths algorithm to n/2. Also, while implementing the proposed mechanism using Verilog HDL, a mode selection capability is provided, enabling the same hardware to act as multiplier and as a simple two's complement calculator using the proposed mechanism. The proposed technique has added advantage in terms of its independentness of the number of bits to be multiplied. It is tested and verified with varied test vectors of different number bit sets. 
Xilinx synthesis tool is used for synthesis and the multiplier mechanism has a maximum operating frequency of 14.59 MHz and a delay of 7.013 ns.", "title": "" }, { "docid": "db907780a2022761d2595a8ad5d03401", "text": "This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.", "title": "" }, { "docid": "8fc560987781afbb25f47eb560176e2c", "text": "Liposomes are microparticulate lipoidal vesicles which are under extensive investigation as drug carriers for improving the delivery of therapeutic agents. Due to new developments in liposome technology, several liposomebased drug formulations are currently in clinical trial, and recently some of them have been approved for clinical use. Reformulation of drugs in liposomes has provided an opportunity to enhance the therapeutic indices of various agents mainly through alteration in their biodistribution. This review discusses the potential applications of liposomes in drug delivery with examples of formulations approved for clinical use, and the problems associated with further exploitation of this drug delivery system. © 1997 Elsevier Science B.V.", "title": "" }, { "docid": "c1aa687c4a48cfbe037fe87ed4062dab", "text": "This paper deals with the modelling and control of a single sided linear switched reluctance actuator. This study provide a presentation of modelling and proposes a study on open and closed loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on PID regulator is employed to upgrade the dynamic behavior of the motor. The simulation results in closed loop show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for sliding door application.", "title": "" }, { "docid": "be82da372c061ef3029273bfc91a9e0a", "text": "Search and rescue missions and surveillance require finding targets in a large area. These tasks often use unmanned aerial vehicles (UAVs) with cameras to detect and move towards a target. However, common UAV approaches make two simplifying assumptions. First, they assume that observations made from different heights are deterministically correct. In practice, observations are noisy, with the noise increasing as the height used for observations increases. Second, they assume that a motion command executes correctly, which may not happen due to wind and other environmental factors. To address these, we propose a sequential algorithm that determines actions in real time based on observations, using partially observable Markov decision processes (POMDPs). Our formulation handles both observations and motion uncertainty and errors. We run offline simulations and learn a policy. This policy is run on a UAV to find the target efficiently. We employ a novel compact formulation to represent the coordinates of the drone relative to the target coordinates. 
Our POMDP policy finds the target up to 3.4 times faster when compared to a heuristic policy.", "title": "" }, { "docid": "4239f9110973888c7eded81037c056b3", "text": "The role of epistasis in the genetic architecture of quantitative traits is controversial, despite the biological plausibility that nonlinear molecular interactions underpin the genotype–phenotype map. This controversy arises because most genetic variation for quantitative traits is additive. However, additive variance is consistent with pervasive epistasis. In this Review, I discuss experimental designs to detect the contribution of epistasis to quantitative trait phenotypes in model organisms. These studies indicate that epistasis is common, and that additivity can be an emergent property of underlying genetic interaction networks. Epistasis causes hidden quantitative genetic variation in natural populations and could be responsible for the small additive effects, missing heritability and the lack of replication that are typically observed for human complex traits.", "title": "" }, { "docid": "5565f51ad8e1aaee43f44917befad58a", "text": "We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark. Our best residual network (ResNet) implementation significantly outperforms Google's previous convolutional neural networks in terms of accuracy. By varying model depth and width, we can achieve compact models that also outperform previous small-footprint variants. To our knowledge, we are the first to examine these approaches for keyword spotting, and our results establish an open-source state-of-the-art reference to support the development of future speech-based interfaces.", "title": "" }, { "docid": "1b47dffdff3825ad44a0430311e2420b", "text": "The present paper describes the SSM algorithm of protein structure comparison in three dimensions, which includes an original procedure of matching graphs built on the protein's secondary-structure elements, followed by an iterative three-dimensional alignment of protein backbone Calpha atoms. The SSM results are compared with those obtained from other protein comparison servers, and the advantages and disadvantages of different scores that are used for structure recognition are discussed. A new score, balancing the r.m.s.d. and alignment length Nalign, is proposed. It is found that different servers agree reasonably well on the new score, while showing considerable differences in r.m.s.d. and Nalign.", "title": "" }, { "docid": "9c16f3ccaab4e668578e3eda7d452ebd", "text": "Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them to produce incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameter and architecture. 
Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change 89% of the human listener’s perception of the audio clip as evaluated in our human study.", "title": "" } ]
scidocsrr
a1fec1fe18c288d3580ef83c567b7e69
Cross-Dataset Recognition: A Survey
[ { "docid": "65901a189e87983dfd01db0161106a86", "text": "The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset. We find that it is beneficial to explicitly account for bias when combining multiple datasets. (For more details refer to [3] and http://undoingbias.csail.mit.edu)", "title": "" }, { "docid": "8fe9ab612f31d349e881550d8c99a446", "text": "This paper investigates a new machine learning strategy called translated learning. Unlike many previous learning tasks, we focus on how to use labeled data from one feature space to enhance the classification of other entirely different learning spaces. For example, we might wish to use labeled text data to help learn a model for classifying image data, when the labeled images are difficult to obtain. An important aspect of translated learning is to build a “bridge” to link one feature space (known as the “source space”) to another space (known as the “target space”) through a translator in order to migrate the knowledge from source to target. The translated learning solution uses a language model to link the class labels to the features in the source spaces, which in turn is translated to the features in the target spaces. Finally, this chain of linkages is completed by tracing back to the instances in the target spaces. We show that this path of linkage can be modeled using a Markov chain and risk minimization. Through experiments on the text-aided image classification and cross-language classification tasks, we demonstrate that our translated learning framework can greatly outperform many state-of-the-art baseline methods.", "title": "" } ]
[ { "docid": "9bbb8ff8e8d498709ee68c6797b00588", "text": "Studies often report that bilingual participants possess a smaller vocabulary in the language of testing than monolinguals, especially in research with children. However, each study is based on a small sample so it is difficult to determine whether the vocabulary difference is due to sampling error. We report the results of an analysis of 1,738 children between 3 and 10 years old and demonstrate a consistent difference in receptive vocabulary between the two groups. Two preliminary analyses suggest that this difference does not change with different language pairs and is largely confined to words relevant to a home context rather than a school context.", "title": "" }, { "docid": "c824b5274ce6afb54c58fae2dd68ff8f", "text": "User modeling plays an important role in delivering customized web services to the users and improving their engagement. However, most user models in the literature do not explicitly consider the temporal behavior of users. More recently, continuous-time user modeling has gained considerable attention and many user behavior models have been proposed based on temporal point processes. However, typical point process-based models often considered the impact of peer influence and content on the user participation and neglected other factors. Gamification elements are among those factors that are neglected, while they have a strong impact on user participation in online services. In this article, we propose interdependent multi-dimensional temporal point processes that capture the impact of badges on user participation besides the peer influence and content factors. We extend the proposed processes to model user actions over the community-based question and answering websites, and propose an inference algorithm based on Variational-Expectation Maximization that can efficiently learn the model parameters. Extensive experiments on both synthetic and real data gathered from Stack Overflow show that our inference algorithm learns the parameters efficiently and the proposed method can better predict the user behavior compared to the alternatives.", "title": "" }, { "docid": "379bc1336026fab6225e39b6c69d55a0", "text": "We show that a recurrent neural network is able to learn a model to represent sequences of communications between computers on a network and can be used to identify outlier network traffic. Defending computer networks is a challenging problem and is typically addressed by manually identifying known malicious actor behavior and then specifying rules to recognize such behavior in network communications. However, these rule-based approaches often generalize poorly and identify only those patterns that are already known to researchers. An alternative approach that does not rely on known malicious behavior patterns can potentially also detect previously unseen patterns. We tokenize and compress netflow into sequences of “words” that form “sentences” representative of a conversation between computers. These sentences are then used to generate a model that learns the semantic and syntactic grammar of the newly generated language. We use Long-Short-Term Memory (LSTM) cell Recurrent Neural Networks (RNN) to capture the complex relationships and nuances of this language. The language model is then used predict the communications between two IPs and the prediction error is used as a measurement of how typical or atyptical the observed communication are. 
By learning a model that is specific to each network, yet generalized to typical computer-to-computer traffic within and outside the network, a language model is able to identify sequences of network activity that are outliers with respect to the model. We demonstrate positive unsupervised attack identification performance (AUC 0.84) on the ISCX IDS dataset which contains seven days of network activity with normal traffic and four distinct attack patterns.", "title": "" }, { "docid": "d0d5d9e1eabc1b282c1db08d8da38214", "text": "Climate change is altering the availability of resources and the conditions that are crucial to plant performance. One way plants will respond to these changes is through environmentally induced shifts in phenotype (phenotypic plasticity). Understanding plastic responses is crucial for predicting and managing the effects of climate change on native species as well as crop plants. Here, we provide a toolbox with definitions of key theoretical elements and a synthesis of the current understanding of the molecular and genetic mechanisms underlying plasticity relevant to climate change. By bringing ecological, evolutionary, physiological and molecular perspectives together, we hope to provide clear directives for future research and stimulate cross-disciplinary dialogue on the relevance of phenotypic plasticity under climate change.", "title": "" }, { "docid": "1d2f72587e694aa8d6435e176e87d4cb", "text": "It is well known that the performance of context-based image processing systems can be improved by allowing the processor (e.g., an encoder or a denoiser) a delay of several samples before making a processing decision. Often, however, for such systems, traditional delayed-decision algorithms can become computationally prohibitive due to the growth in the size of the space of possible solutions. In this paper, we propose a reduced-complexity, one-pass, delayed-decision algorithm that systematically reduces the size of the search space, while also preserving its structure. In particular, we apply the proposed algorithm to two examples of adaptive context-based image processing systems, an image coding system that employs a context-based entropy coder, and a spatially adaptive image-denoising system. For these two types of widely used systems, we show that the proposed delayed-decision search algorithm outperforms instantaneous-decision algorithms with only a small increase in complexity. We also show that the performance of the proposed algorithm is better than that of other, higher complexity, delayed-decision algorithms.", "title": "" }, { "docid": "6b19893324e4012a622c0250436e1ab3", "text": "Nowadays, email is one of the fastest ways to conduct communications through sending out information and attachments from one to another. Individuals and organizations are all benefit the convenience from email usage, but at the same time they may also suffer the unexpected user experience of receiving spam email all the time. Spammers flood the email servers and send out mass quantity of unsolicited email to the end users. From a business perspective, email users have to spend time on deleting received spam email which definitely leads to the productivity decrease and cause potential loss for organizations. Thus, how to detect the email spam effectively and efficiently with high accuracy becomes a significant study. 
In this study, data mining will be utilized to process machine learning by using different classifiers for training and testing and filters for data preprocessing and feature selection. It aims to seek out the optimal hybrid model with higher accuracy or base on other metric’s evaluation. The experiment results show accuracy improvement in email spam detection by using hybrid techniques compared to the single classifiers used in this research. The optimal hybrid model provides 93.00% of accuracy and 7.80% false positive rate for email spam detection.", "title": "" }, { "docid": "5fa2dfc9cbf6568d5282601781e14b58", "text": "Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős–Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible. Artificial neural networks are artificial intelligence computing methods which are inspired by biological neural networks. Here the authors propose a method to design neural networks as sparse scale-free networks, which leads to a reduction in computational time required for training and inference.", "title": "" }, { "docid": "cd8eeaeb81423fcb1c383f2b60e928df", "text": "Detecting and representing changes to data is important for active databases, data warehousing, view maintenance, and version and configuration management. Most previous work in change management has dealt with flat-file and relational data; we focus on hierarchically structured data. Since in many cases changes must be computed from old and new versions of the data, we define the hierarchical change detection problem as the problem of finding a \"minimum-cost edit script\" that transforms one data tree to another, and we present efficient algorithms for computing such an edit script. Our algorithms make use of some key domain characteristics to achieve substantially better performance than previous, general-purpose algorithms. We study the performance of our algorithms both analytically and empirically, and we describe the application of our techniques to hierarchically structured documents.", "title": "" }, { "docid": "2f30301143dc626a3013eb24629bfb45", "text": "A vast array of devices, ranging from industrial robots to self-driven cars or smartphones, require increasingly sophisticated processing of real-world input data (image, voice, radio, ...). Interestingly, hardware neural network accelerators are emerging again as attractive candidate architectures for such tasks. The neural network algorithms considered come from two, largely separate, domains: machine-learning and neuroscience. 
These neural networks have very different characteristics, so it is unclear which approach should be favored for hardware implementation. Yet, few studies compare them from a hardware perspective. We implement both types of networks down to the layout, and we compare the relative merit of each approach in terms of energy, speed, area cost, accuracy and functionality.\n Within the limit of our study (current SNN and machine-learning NN algorithms, current best effort at hardware implementation efforts, and workloads used in this study), our analysis helps dispel the notion that hardware neural network accelerators inspired from neuroscience, such as SNN+STDP, are currently a competitive alternative to hardware neural networks accelerators inspired from machine-learning, such as MLP+BP: not only in terms of accuracy, but also in terms of hardware cost for realistic implementations, which is less expected. However, we also outline that SNN+STDP carry potential for reduced hardware cost compared to machine-learning networks at very large scales, if accuracy issues can be controlled (or for applications where they are less important). We also identify the key sources of inaccuracy of SNN+STDP which are less related to the loss of information due to spike coding than to the nature of the STDP learning algorithm. Finally, we outline that for the category of applications which require permanent online learning and moderate accuracy, SNN+STDP hardware accelerators could be a very cost-efficient solution.", "title": "" }, { "docid": "c27ba892408391234da524ffab0e7418", "text": "Sunlight and skylight are rarely rendered correctly in computer graphics. A major reason for this is high computational expense. Another is that precise atmospheric data is rarely available. We present an inexpensive analytic model that approximates full spectrum daylight for various atmospheric conditions. These conditions are parameterized using terms that users can either measure or estimate. We also present an inexpensive analytic model that approximates the effects of atmosphere (aerial perspective). These models are fielded in a number of conditions and intermediate results verified against standard literature from atmospheric science. Our goal is to achieve as much accuracy as possible without sacrificing usability.", "title": "" }, { "docid": "c35fa79bd405ec0fb6689d395929c055", "text": "This study examines the potential profit of bull flag technical trading rules using a template matching technique based on pattern recognition for the Nasdaq Composite Index (NASDAQ) and Taiwan Weighted Index (TWI). To minimize measurement error due to data snooping, this study performed a series of experiments to test the effectiveness of the proposed method. The empirical results indicated that all of the technical trading rules correctly predict the direction of changes in the NASDAQ and TWI. This finding may provide investors with important information on asset allocation. Moreover, better bull flag template price fit is associated with higher average return. The empirical results demonstrated that the average return of trading rules conditioned on bull flag significantly better than buying every day for the study period, especially for TWI. 2006 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "5b021c0223ee25535508eb1d6f63ff55", "text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via external I 2C master device such as universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications", "title": "" }, { "docid": "60306e39a7b281d35e8a492aed726d82", "text": "The aim of this study was to assess the efficiency of four anesthetic agents, tricaine methanesulfonate (MS-222), clove oil, 7 ketamine, and tobacco extract on juvenile rainbow trout. Also, changes of blood indices were evaluated at optimum doses of four anesthetic agents. Basal effective concentrations determined were 40 mg L−1 (induction, 111 ± 16 s and recovery time, 246 ± 36 s) for clove oil, 150 mg L−1 (induction, 287 ± 59 and recovery time, 358 ± 75 s) for MS-222, 1 mg L−1 (induction, 178 ± 38 and recovery time, 264 ± 57 s) for ketamine, and 30 mg L−1 (induction, 134 ± 22 and recovery time, 285 ± 42 s) for tobacco. According to our results, significant changes in hematological parameters including white blood cells (WBCs), red blood cells (RBCs), hematocrit (Ht), and hemoglobin (Hb) were found between four anesthetics agents. Also, significant differences were observed in some plasma parameters including cortical, glucose, and lactate between experimental treatments. Induction and recovery times for juvenile Oncorhynchus mykiss anesthetized with anesthetic agents were dose-dependent.", "title": "" }, { "docid": "2550502036aac5cf144cb8a0bc2d525b", "text": "Significant achievements have been made on the development of next-generation filtration and separation membranes using graphene materials, as graphene-based membranes can afford numerous novel mass-transport properties that are not possible in state-of-art commercial membranes, making them promising in areas such as membrane separation, water desalination, proton conductors, energy storage and conversion, etc. The latest developments on understanding mass transport through graphene-based membranes, including perfect graphene lattice, nanoporous graphene and graphene oxide membranes are reviewed here in relation to their potential applications. A summary and outlook is further provided on the opportunities and challenges in this arising field. 
The aspects discussed may enable researchers to better understand the mass-transport mechanism and to optimize the synthesis of graphene-based membranes toward large-scale production for a wide range of applications.", "title": "" }, { "docid": "48d778934127343947b494fe51f56a33", "text": "In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper.", "title": "" }, { "docid": "7a5167ffb79f35e75359c979295c22ee", "text": "Precise forecast of the electrical load plays a highly significant role in the electricity industry and market. It provides economic operations and effective future plans for the utilities and power system operators. Due to the intermittent and uncertain characteristic of the electrical load, many research studies have been directed to nonlinear prediction methods. In this paper, a hybrid prediction algorithm comprised of Support Vector Regression (SVR) and Modified Firefly Algorithm (MFA) is proposed to provide the short term electrical load forecast. The SVR models utilize the nonlinear mapping feature to deal with nonlinear regressions. However, such models suffer from a methodical algorithm for obtaining the appropriate model parameters. Therefore, in the proposed method the MFA is employed to obtain the SVR parameters accurately and effectively. In order to evaluate the efficiency of the proposed methodology, it is applied to the electrical load demand in Fars, Iran. The obtained results are compared with those obtained from the ARMA model, ANN, SVR-GA, SVR-HBMO, SVR-PSO and SVR-FA. The experimental results affirm that the proposed algorithm outperforms other techniques. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "91e2dadb338fbe97b009efe9e8f60446", "text": "An efficient smoke detection algorithm on color video sequences obtained from a stationary camera is proposed. Our algorithm considers dynamic and static features of smoke and is composed of basic steps: preprocessing; slowly moving areas and pixels segmentation in a current input frame based on adaptive background subtraction; merge slowly moving areas with pixels into blobs; classification of the blobs obtained before. We use adaptive background subtraction at a stage of moving detection. Moving blobs classification is based on optical flow calculation, Weber contrast analysis and takes into account primary direction of smoke propagation. Real video surveillance sequences were used for smoke detection with utilization our algorithm. A set of experimental results is presented in the paper.", "title": "" }, { "docid": "c5f9f3beff52655f72d2d5870df6fa60", "text": "The current move to Cloud Computing raises the need for verifiable delegation of computations, where a weak client delegates his computation to a powerful server, while maintaining the ability to verify that the result is correct. 
Although there are prior solutions to this problem, none of them is yet both general and practical for real-world use. We demonstrate a relatively efficient and general solution where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We show: A protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family. The protocol is set in terms of Turing Machines but can be adapted to other computation models. An adaptation of the protocol for the X86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live clouds. We show that the protocol is practical, can work with nowadays clouds, and is efficient both for the servers and for the client.", "title": "" }, { "docid": "17d0da8dd05d5cfb79a5f4de4449fcdd", "text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Q uantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \" People are really building things, \" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \" I've never seen anything like that. It's no longer just research. \" Google started working on a form of quantum computing that harnesses super-conductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in", "title": "" }, { "docid": "4cb2c365abfbb29830557654f015daa2", "text": "The excellent electrical, optical and mechanical properties of graphene have driven the search to find methods for its large-scale production, but established procedures (such as mechanical exfoliation or chemical vapour deposition) are not ideal for the manufacture of processable graphene sheets. An alternative method is the reduction of graphene oxide, a material that shares the same atomically thin structural framework as graphene, but bears oxygen-containing functional groups. Here we use molecular dynamics simulations to study the atomistic structure of progressively reduced graphene oxide. 
The chemical changes of oxygen-containing functional groups on the annealing of graphene oxide are elucidated and the simulations reveal the formation of highly stable carbonyl and ether groups that hinder its complete reduction to graphene. The calculations are supported by infrared and X-ray photoelectron spectroscopy measurements. Finally, more effective reduction treatments to improve the reduction of graphene oxide are proposed.", "title": "" } ]
scidocsrr
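Each record in this dump follows the same shape — query_id, query, positive_passages, negative_passages, and a subset tag such as the one on the line above. For readers who want to work with these records programmatically, a minimal loading sketch follows; the file name and the one-JSON-object-per-line layout are assumptions about how the dump is stored, not something stated in the dump itself.

```python
import json

def load_records(path="scidocsrr.jsonl"):
    # Assumes one JSON object per line with the fields seen in this dump:
    # query_id, query, positive_passages, negative_passages, subset.
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

if __name__ == "__main__":
    recs = load_records()
    for r in recs[:3]:
        print(r["query_id"], r["query"],
              len(r["positive_passages"]), len(r["negative_passages"]))
```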
629c7ba37ff27dfbd3c5867f2e7e0e61
MUTE: Majority under-sampling technique
[ { "docid": "a0b862a758c659b62da2114143bf7687", "text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation. It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.", "title": "" }, { "docid": "5a3b8a2ec8df71956c10b2eb10eabb99", "text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.", "title": "" } ]
[ { "docid": "2171c57b911161d805ffc08fbe02f92a", "text": "The past decade has witnessed a growing interest in vehicular networking and its vast array of potential applications. Increased wireless accessibility of the Internet from vehicles has triggered the emergence of vehicular safety applications, locationspecific applications, and multimedia applications. Recently, Professor Olariu and his coworkers have promoted the vision of Vehicular Clouds (VCs), a non-trivial extension, along several dimensions, of conventional Cloud Computing. In a VC, the under-utilized vehicular resources including computing power, storage and Internet connectivity can be shared between drivers or rented out over the Internet to various customers, very much as conventional cloud resources are. The goal of this chapter is to introduce and review the challenges and opportunities offered by what promises to be the Next Paradigm Shift:From Vehicular Networks to Vehicular Clouds. Specifically, the chapter introduces VCs and discusses some of their distinguishing characteristics and a number of fundamental research challenges. To illustrate the huge array of possible applications of the powerful VC concept, a number of possible application scenarios are presented and discussed. As the adoption and success of the vehicular cloud concept is inextricably related to security and privacy issues, a number of security and privacy issues specific to vehicular clouds are discussed as well. Additionally, data aggregation and empirical results are presented. Mobile Ad Hoc Networking: Cutting Edge Directions, Second Edition. Edited by Stefano Basagni, Marco Conti, Silvia Giordano, and Ivan Stojmenovic. © 2013 by The Institute of Electrical and Electronics Engineers, Inc. Published 2013 by John Wiley & Sons, Inc.", "title": "" }, { "docid": "3cd383e547b01040261dc1290d87b02e", "text": "Abnormal condition in a power system generally leads to a fall in system frequency, and it leads to system blackout in an extreme condition. This paper presents a technique to develop an auto load shedding and islanding scheme for a power system to prevent blackout and to stabilize the system under any abnormal condition. The technique proposes the sequence and conditions of the applications of different load shedding schemes and islanding strategies. It is developed based on the international current practices. It is applied to the Bangladesh Power System (BPS), and an auto load-shedding and islanding scheme is developed. The effectiveness of the developed scheme is investigated simulating different abnormal conditions in BPS.", "title": "" }, { "docid": "75ce2ccca2afcae56101e141a42ac2a2", "text": "Hip disarticulation is an amputation through the hip joint capsule, removing the entire lower extremity, with closure of the remaining musculature over the exposed acetabulum. Tumors of the distal and proximal femur were treated by total femur resection; a hip disarticulation sometimes is performance for massive trauma with crush injuries to the lower extremity. This article discusses the design a system for rehabilitation of a patient with bilateral hip disarticulations. The prosthetics designed allowed the patient to do natural gait suspended between parallel articulate crutches with the body weight support between the crutches. The care of this patient was a challenge due to bilateral amputations at such a high level and the special needs of a patient mobility. 
Keywords— Amputation, prosthesis, mobility,", "title": "" }, { "docid": "b5b91947716e3594e3ddbb300ea80d36", "text": "In this paper, a novel drive method, which is different from the traditional motor drive techniques, for high-speed brushless DC (BLDC) motor is proposed and verified by a series of experiments. It is well known that the BLDC motor can be driven by either pulse-width modulation (PWM) techniques with a constant dc-link voltage or pulse-amplitude modulation (PAM) techniques with an adjustable dc-link voltage. However, to our best knowledge, there is rare study providing a proper drive method for a high-speed BLDC motor with a large power over a wide speed range. Therefore, the detailed theoretical analysis comparison of the PWM control and the PAM control for high-speed BLDC motor is first given. Then, a conclusion that the PAM control is superior to the PWM control at high speed is obtained because of decreasing the commutation delay and high-frequency harmonic wave. Meanwhile, a new high-speed BLDC motor drive method based on the hybrid approach combining PWM and PAM is proposed. Finally, the feasibility and effectiveness of the performance analysis comparison and the new drive method are verified by several experiments.", "title": "" }, { "docid": "55d0ce47c7864e42412b4532869e66d6", "text": "Deep learning has become very popular for tasks such as predictive modeling and pattern recognition in handling big data. Deep learning is a powerful machine learning method that extracts lower level features and feeds them forward for the next layer to identify higher level features that improve performance. However, deep neural networks have drawbacks, which include many hyper-parameters and infinite architectures, opaqueness into results, and relatively slower convergence on smaller datasets. While traditional machine learning algorithms can address these drawbacks, they are not typically capable of the performance levels achieved by deep neural networks. To improve performance, ensemble methods are used to combine multiple base learners. Super learning is an ensemble that finds the optimal combination of diverse learning algorithms. This paper proposes deep super learning as an approach which achieves log loss and accuracy results competitive to deep neural networks while employing traditional machine learning algorithms in a hierarchical structure. The deep super learner is flexible, adaptable, and easy to train with good performance across different tasks using identical hyper-parameter values. Using traditional machine learning requires fewer hyper-parameters, allows transparency into results, and has relatively fast convergence on smaller datasets. Experimental results show that the deep super learner has superior performance compared to the individual base learners, single-layer ensembles, and in some cases deep neural networks. Performance of the deep super learner may further be improved with task-specific tuning.", "title": "" }, { "docid": "5bac6135af1c6014352d6ce5e91ec8d3", "text": "Acute necrotizing fasciitis (NF) in children is a dangerous illness characterized by progressive necrosis of the skin and subcutaneous tissue. The present study summarizes our recent experience with the treatment of pediatric patients with severe NF. Between 2000 and 2009, eight children suffering from NF were admitted to our department. 
Four of the children received an active treatment strategy including continuous renal replacement therapy (CRRT), radical debridement, and broad-spectrum antibiotics. Another four children presented at a late stage of illness, and did not complete treatment. Clinical data for these two patient groups were retrospectively analyzed. The four patients that completed CRRT, radical debridement, and a course of broad-spectrum antibiotics were cured without any significant residual morbidity. The other four infants died shortly after admission. Early diagnosis, timely debridement, and aggressive use of broad-spectrum antibiotics are key factors for achieving a satisfactory outcome for cases of acute NF. Early intervention with CRRT to prevent septic shock may also improve patient outcome.", "title": "" }, { "docid": "b771737351b984881e0fce7f9bb030e8", "text": "BACKGROUND\nConsidering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone.\n\n\nMATERIAL/METHODS\nIn this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing.\n\n\nRESULTS\nThe Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group.\n\n\nCONCLUSIONS\nThis study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.", "title": "" }, { "docid": "11e220528f9d4b6a51cdb63268934586", "text": "The function of DIRCM (directed infrared countermeasures) jamming is to cause the missile to miss its intended target by disturbing the seeker tracking process. The DIRCM jamming uses the pulsing flashes of infrared (IR) energy and its frequency, phase and intensity have the influence on the missile guidance system. In this paper, we analyze the DIRCM jamming effect of the spin-scan reticle seeker. The simulation results show that the jamming effect is greatly influenced by frequency, phase and intensity of the jammer signal.", "title": "" }, { "docid": "62b24fad8ab9d1c426ed3ff7c3c5fb49", "text": "In the present paper we have reported a wavelet based time-frequency multiresolution analysis of an ECG signal. 
The ECG (electrocardiogram), which records hearts electrical activity, is able to provide with useful information about the type of Cardiac disorders suffered by the patient depending upon the deviations from normal ECG signal pattern. We have plotted the coefficients of continuous wavelet transform using Morlet wavelet. We used different ECG signal available at MIT-BIH database and performed a comparative study. We demonstrated that the coefficient at a particular scale represents the presence of QRS signal very efficiently irrespective of the type or intensity of noise, presence of unusually high amplitude of peaks other than QRS peaks and Base line drift errors. We believe that the current studies can enlighten the path towards development of very lucid and time efficient algorithms for identifying and representing the QRS complexes that can be done with normal computers and processors. KeywordsECG signal, Continuous Wavelet Transform, Morlet Wavelet, Scalogram, QRS Detector.", "title": "" }, { "docid": "d15dc60ef2fb1e6096a3aba372698fd9", "text": "One of the most interesting applications of Industry 4.0 paradigm is enhanced process control. Traditionally, process control solutions based on Cyber-Physical Systems (CPS) consider a top-down view where processes are represented as executable high-level descriptions. However, most times industrial processes follow a bottom-up model where processes are executed by low-level devices which are hard-programmed with the process to be executed. Thus, high-level components only may supervise the process execution as devices cannot modify dynamically their behavior. Therefore, in this paper we propose a vertical CPS-based solution (including a reference and a functional architecture) adequate to perform enhanced process control in Industry 4.0 scenarios with a bottom-up view. The proposed solution employs an event-driven service-based architecture where control is performed by means of finite state machines. Furthermore, an experimental validation is provided proving that in more than 97% of cases the proposed solution allows a stable and effective control.", "title": "" }, { "docid": "abe729a351eb9dbc1688abe5133b28c2", "text": "C. H. Tian B. K. Ray J. Lee R. Cao W. Ding This paper presents a framework for the modeling and analysis of business model designs involving a network of interconnected business entities. The framework includes an ecosystem-modeling component, a simulation component, and a serviceanalysis component, and integrates methods from value network modeling, game theory analysis, and multiagent systems. A role-based paradigm is introduced for characterizing ecosystem entities in order to easily allow for the evolution of the ecosystem and duplicated functionality for entities. We show how the framework can be used to provide insight into value distribution among the entities and evaluation of business model performance under different scenarios. The methods are illustrated using a case study of a retail business-to-business service ecosystem.", "title": "" }, { "docid": "2afbb4e8963b9e6953fd6f7f8c595c06", "text": "Large-scale linguistically annotated corpora have played a crucial role in advancing the state of the art of key natural language technologies such as syntactic, semantic and discourse analyzers, and they serve as training data as well as evaluation benchmarks. Up till now, however, most of the evaluation has been done on monolithic corpora such as the Penn Treebank, the Proposition Bank. 
As a result, it is still unclear how the state-of-the-art analyzers perform in general on data from a variety of genres or domains. The completion of the OntoNotes corpus, a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information, makes it possible to perform such an evaluation. This paper presents an analysis of the performance of publicly available, state-of-the-art tools on all layers and languages in the OntoNotes v5.0 corpus. This should set the benchmark for future development of various NLP components in syntax and semantics, and possibly encourage research towards an integrated system that makes use of the various layers jointly to improve overall performance.", "title": "" }, { "docid": "00614d23a028fe88c3f33db7ace25a58", "text": "Cloud Computing and The Internet of Things are the two hot points in the Internet field. The application of the two new technologies is in hot discussion and research, but quite less on the field of agriculture and forestry. Thus, in this paper, we analyze the study and application of Cloud Computing and The Internet of Things on agriculture and forestry. Then we put forward an idea that making a combination of the two techniques and analyze the feasibility, applications and future prospect of the combination.", "title": "" }, { "docid": "cfc9a6e0a99ba5ba5668037650e95d1d", "text": "This paper tries to estimate post-legalization production costs for indoor and outdoor cannabis cultivation as well as parallel estimates for processing costs. Commercial production for general use is not legal anywhere. Hence, this is an exercise in inference based on imperfect analogs supplemented by spare and unsatisfactory data of uncertain provenance. While some parameters are well grounded, many come from the gray literature and/or conversations with others making similar estimates, marijuana growers, and farmers of conventional goods. Hence, this exercise should be taken with more than a few grains of salt. Nevertheless, to the extent that the results are even approximately correct, they suggest that wholesale prices after legalization could be dramatically lower than they are today, quite possibly a full order of magnitude lower than are current prices.", "title": "" }, { "docid": "9beff0659cc5aad37097d212caaeef40", "text": "Mobile cloud computing (MC2) is emerging as a promising computing paradigm which helps alleviate the conflict between resource-constrained mobile devices and resource-consuming mobile applications through computation offloading. In this paper, we analyze the computation offloading problem in cloudlet-based mobile cloud computing. Different from most of the previous works which are either from the perspective of a single user or under the setting of a single wireless access point (AP), we research the computation offloading strategy of multiple users via multiple wireless APs. With the widespread deployment of WLAN, offloading via multiple wireless APs will obtain extensive application. Taking energy consumption and delay (including computing and transmission delay) into account, we present a game-theoretic analysis of the computation offloading problem while mimicking the selfish nature of the individuals. In the case of homogeneous mobile users, conditions of Nash equilibrium are analyzed, and an algorithm that admits a Nash equilibrium is proposed. 
For heterogeneous users, we prove the existence of Nash equilibrium by introducing the definition of exact potential game and design a distributed computation offloading algorithm to help mobile users choose proper offloading strategies. Numerical extensive simulations have been conducted and results demonstrate that the proposed algorithm can achieve desired system performance.", "title": "" }, { "docid": "6d3e5f798cee29e0039965d450b36cf3", "text": "Many mammals are born in a very immature state and develop their rich repertoire of behavioral and cognitive functions postnatally. This development goes in parallel with changes in the anatomical and functional organization of cortical structures which are involved in most complex activities. The emerging spatiotemporal activity patterns in multi-neuronal cortical networks may indeed form a direct neuronal correlate of systemic functions like perception, sensorimotor integration, decision making or memory formation. During recent years, several studies--mostly in rodents--have shed light on the ontogenesis of such highly organized patterns of network activity. While each local network has its own peculiar properties, some general rules can be derived. We therefore review and compare data from the developing hippocampus, neocortex and--as an intermediate region--entorhinal cortex. All cortices seem to follow a characteristic sequence starting with uncorrelated activity in uncoupled single neurons where transient activity seems to have mostly trophic effects. In rodents, before and shortly after birth, cortical networks develop weakly coordinated multineuronal discharges which have been termed synchronous plateau assemblies (SPAs). While these patterns rely mostly on electrical coupling by gap junctions, the subsequent increase in number and maturation of chemical synapses leads to the generation of large-scale coherent discharges. These patterns have been termed giant depolarizing potentials (GDPs) for predominantly GABA-induced events or early network oscillations (ENOs) for mostly glutamatergic bursts, respectively. During the third to fourth postnatal week, cortical areas reach their final activity patterns with distinct network oscillations and highly specific neuronal discharge sequences which support adult behavior. While some of the mechanisms underlying maturation of network activity have been elucidated much work remains to be done in order to fully understand the rules governing transition from immature to mature patterns of network activity.", "title": "" }, { "docid": "d5fcc6e6046ca293fc9b5afcc236325f", "text": "Purpose – The purpose of this study is to conduct a meta-analysis of prior scientometric research of the knowledge management (KM) field. Design/methodology/approach – A total of 108 scientometric studies of the KM discipline were subjected to meta-analysis techniques. Findings – The overall volume of scientometric KM works has been growing, reaching up to ten publications per year by 2012, but their key findings are somewhat inconsistent. Most scientometric KM research is published in non-KM-centric journals. The KM discipline has deep historical roots. It suffers from a high degree of over-differentiation and is represented by dissimilar research streams. The top six most productive countries for KM research are the USA, the UK, Canada, Germany, Australia, and Spain. KM exhibits attributes of a healthy academic domain with no apparent anomalies and is progressing towards academic maturity. 
Practical implications – Scientometric KM researchers should use advanced empirical methods, become aware of prior scientometric research, rely on multiple databases, develop a KM keyword classification scheme, publish their research in KM-centric outlets, focus on rigorous research of the forums for KM publications, improve their cooperation, conduct a comprehensive study of individual and institutional productivity, and investigate interdisciplinary collaboration. KM-centric journals should encourage authors to employ under-represented empirical methods and conduct meta-analysis studies and should discourage conceptual publications, especially the development of new frameworks. To improve the impact of KM research on the state of practice, knowledge dissemination channels should be developed. Originality/value – This is the first documented attempt to conduct a meta-analysis of scientometric research of the KM discipline.", "title": "" }, { "docid": "260e574e9108e05b98df7e4ed489e5fc", "text": "Why are we not living yet with robots? If robots are not common everyday objects, it is maybe because we have looked for robotic applications without considering with sufficient attention what could be the experience of interacting with a robot. This article introduces the idea of a value profile, a notion intended to capture the general evolution of our experience with different kinds of objects. After discussing value profiles of commonly used objects, it offers a rapid outline of the challenging issues that must be investigated concerning immediate, short-term and long-term experience with robots. Beyond science-fiction classical archetypes, the picture emerging from this analysis is the one of versatile everyday robots, autonomously developing in interaction with humans, communicating with one another, changing shape and body in order to be adapted to their various context of use. To become everyday objects, robots will not necessary have to be useful, but they will have to be at the origins of radically new forms of experiences.", "title": "" }, { "docid": "5d753a475a18f250b2e3143cf80a6e33", "text": "In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce image-agnostic and image-dependent perturbations for targeted and non-targeted attacks. We also demonstrate that similar architectures can achieve impressive results in fooling both classification and semantic segmentation models, obviating the need for hand-crafting attack methods for each task. Using extensive experiments on challenging high-resolution datasets such as ImageNet and Cityscapes, we show that our perturbations achieve high fooling rates with small perturbation norms. Moreover, our attacks are considerably faster than current iterative methods at inference time.", "title": "" }, { "docid": "70c9fe96604c617a2e94fd721add3fb5", "text": "Multi-task learning aims to boost the performance of multiple prediction tasks by appropriately sharing relevant information among them. However, it always suffers from the negative transfer problem. And due to the diverse learning difficulties and convergence rates of different tasks, jointly optimizing multiple tasks is very challenging. 
To solve these problems, we present a weighted multi-task deep convolutional neural network for person attribute analysis. A novel validation loss trend algorithm is, for the first time proposed to dynamically and adaptively update the weight for learning each task in the training process. Extensive experiments on CelebA, Market-1501 attribute and Duke attribute datasets clearly show that state-of-the-art performance is obtained; and this validates the effectiveness of our proposed framework.", "title": "" } ]
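The last passage above names a "validation loss trend" rule for adaptively weighting tasks in a multi-task network, but does not spell it out. The snippet below is therefore only a guessed, minimal variant: slower-converging tasks receive a larger share of the total loss weight, and the weights are re-normalised after each epoch. The function name, the dictionary layout, and the ratio-based trend measure are all assumptions.

```python
def update_task_weights(val_loss_history, eps=1e-8):
    # val_loss_history: {task_name: [loss_after_epoch_1, loss_after_epoch_2, ...]}
    # Tasks whose validation loss has descended the least relative to its
    # starting value (the harder, slower-converging tasks) get more weight.
    # One plausible reading of a "validation loss trend" rule, not the
    # paper's exact algorithm.
    raw = {}
    for task, losses in val_loss_history.items():
        if len(losses) < 2:
            raw[task] = 1.0                       # no trend yet: neutral weight
        else:
            raw[task] = losses[-1] / (losses[0] + eps) + eps
    total = sum(raw.values())
    return {task: w / total for task, w in raw.items()}
```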
scidocsrr
0abfeed109fa5c4a24dab2defa2f48b3
Pixelwise Instance Segmentation with a Dynamically Instantiated Network
[ { "docid": "c15f36dccebee50056381c41e6ddb2dc", "text": "Instance-level object segmentation is an important yet under-explored task. Most of state-of-the-art methods rely on region proposal methods to extract candidate segments and then utilize object classification to produce final results. Nonetheless, generating reliable region proposals itself is a quite challenging and unsolved task. In this work, we propose a Proposal-Free Network (PFN) to address the instance-level object segmentation problem, which outputs the numbers of instances of different categories and the pixel-level information on i) the coordinates of the instance bounding box each pixel belongs to, and ii) the confidences of different categories for each pixel, based on pixel-to-pixel deep convolutional neural network. All the outputs together, by using any off-the-shelf clustering method for simple post-processing, can naturally generate the ultimate instance-level object segmentation results. The whole PFN can be easily trained without the requirement of a proposal generation stage. Extensive evaluations on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate the effectiveness of the proposed PFN solution without relying on any proposal generation methods.", "title": "" }, { "docid": "c3827ca529fa0ffd60cc192a08b87d92", "text": "We present the first fully convolutional end-to-end solution for instance-aware semantic segmentation task. It inherits all the merits of FCNs for semantic segmentation [29] and instance mask proposal [5]. It performs instance mask prediction and classification jointly. The underlying convolutional representation is fully shared between the two sub-tasks, as well as between all regions of interest. The network architecture is highly integrated and efficient. It achieves state-of-the-art performance in both accuracy and efficiency. It wins the COCO 2016 segmentation competition by a large margin. Code would be released at https://github.com/daijifeng001/TA-FCN.", "title": "" }, { "docid": "2e99600c993f9f68e38668cf62557b94", "text": "Existing methods for pixel-wise labelling tasks generally disregard the underlying structure of labellings, often leading to predictions that are visually implausible. While incorporating structure into the model should improve prediction quality, doing so is challenging - manually specifying the form of structural constraints may be impractical and inference often becomes intractable even if structural constraints are given. We sidestep this problem by reducing structured prediction to a sequence of unconstrained prediction problems and demonstrate that this approach is capable of automatically discovering priors on shape, contiguity of region predictions and smoothness of region contours from data without any a priori specification. On the instance segmentation task, this method outperforms the state-of-the-art, achieving a mean APr of 63:6% at 50% overlap and 43:3% at 70% overlap.", "title": "" }, { "docid": "3380a9a220e553d9f7358739e3f28264", "text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. 
We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.", "title": "" }, { "docid": "1a17f5bebb430ade117f449a2837b10b", "text": "Traditional Scene Understanding problems such as Object Detection and Semantic Segmentation have made breakthroughs in recent years due to the adoption of deep learning. However, the former task is not able to localise objects at a pixel level, and the latter task has no notion of different instances of objects of the same class. We focus on the task of Instance Segmentation which recognises and localises objects down to a pixel level. Our model is based on a deep neural network trained for semantic segmentation. This network incorporates a Conditional Random Field with end-to-end trainable higher order potentials based on object detector outputs. This allows us to reason about instances from an initial, category-level semantic segmentation. Our simple method effectively leverages the great progress recently made in semantic segmentation and object detection. The accurate instance-level segmentations that our network produces is reflected by the considerable improvements obtained over previous work at high APr IoU thresholds.", "title": "" }, { "docid": "5a2dcebfadb2e52d1f506b5e8e6547d8", "text": "The ability to predict and therefore to anticipate the future is an important attribute of intelligence. It is also of utmost importance in real-time systems, e.g. in robotics or autonomous driving, which depend on visual scene understanding for decision making. While prediction of the raw RGB pixel values in future video frames has been studied in previous work, here we introduce the novel task of predicting semantic segmentations of future frames. Given a sequence of video frames, our goal is to predict segmentation maps of not yet observed video frames that lie up to a second or further in the future. We develop an autoregressive convolutional neural network that learns to iteratively generate multiple frames. Our results on the Cityscapes dataset show that directly predicting future segmentations is substantially better than predicting and then segmenting future RGB frames. Prediction results up to half a second in the future are visually convincing and are much more accurate than those of a baseline based on warping semantic segmentations using optical flow.", "title": "" } ]
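Several of the passages above, like the query itself, revolve around deriving instance-level masks by combining a category-level semantic segmentation with object detections. Stripped of the CRF machinery those papers use, the core assignment step can be sketched as follows: every pixel of class c is handed to the highest-scoring detection of class c whose box covers it. The data layout (a class-id map plus a list of box/score/class dicts) is an assumption made for illustration.

```python
import numpy as np

def instances_from_semantics_and_boxes(sem_map, detections):
    # sem_map: (H, W) integer array of semantic class ids.
    # detections: list of {"box": (x0, y0, x1, y1), "class_id": int, "score": float}.
    # Returns an (H, W) instance-id map; 0 means "no instance assigned".
    inst = np.zeros_like(sem_map, dtype=np.int32)
    # Paint low-scoring detections first so higher-scoring ones overwrite them.
    order = sorted(range(len(detections)), key=lambda i: detections[i]["score"])
    for inst_id, i in enumerate(order, start=1):
        x0, y0, x1, y1 = detections[i]["box"]
        window = sem_map[y0:y1, x0:x1] == detections[i]["class_id"]
        inst[y0:y1, x0:x1][window] = inst_id
    return inst
```

A real system would still have to handle pixels covered by no box and recalibrate detector scores, which is where the dynamically instantiated, end-to-end trained CRF terms described in the positive passages come in.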
[ { "docid": "7d61a2bedb128d77a81c6b4958a17c30", "text": "Levering data on social media, such as Twitter and Facebook, requires information retrieval algorithms to become able to relate very short text fragments to each other. Traditional text similarity methods such as tf-idf cosine-similarity, based on word overlap, mostly fail to produce good results in this case, since word overlap is little or non-existent. Recently, distributed word representations, or word embeddings, have been shown to successfully allow words to match on the semantic level. In order to pair short text fragments -- as a concatenation of separate words -- an adequate distributed sentence representation is needed, in existing literature often obtained by naively combining the individual word representations. We therefore investigated several text representations as a combination of word embeddings in the context of semantic pair matching. This paper investigates the effectiveness of several such naive techniques, as well as traditional tf-idf similarity, for fragments of different lengths. Our main contribution is a first step towards a hybrid method that combines the strength of dense distributed representations -- as opposed to sparse term matching -- with the strength of tf-idf based methods to automatically reduce the impact of less informative terms. Our new approach outperforms the existing techniques in a toy experimental set-up, leading to the conclusion that the combination of word embeddings and tf-idf information might lead to a better model for semantic content within very short text fragments.", "title": "" }, { "docid": "c12c9fa98f672ec1bfde404d5bf54a35", "text": "Speech recognition has become an important feature in smartphones in recent years. Different from traditional automatic speech recognition, the speech recognition on smartphones can take advantage of personalized language models to model the linguistic patterns and wording habits of a particular smartphone owner better. Owing to the popularity of social networks in recent years, personal texts and messages are no longer inaccessible. However, data sparseness is still an unsolved problem. In this paper, we propose a three-step adaptation approach to personalize recurrent neural network language models (RNNLMs). We believe that its capability to model word histories as distributed representations of arbitrary length can help mitigate the data sparseness problem. Furthermore, we also propose additional user-oriented features to empower the RNNLMs with stronger capabilities for personalization. The experiments on a Facebook dataset showed that the proposed method not only drastically reduced the model perplexity in preliminary experiments, but also moderately reduced the word error rate in n-best rescoring tests.", "title": "" }, { "docid": "8434059364439763de89ff14356615d2", "text": "OBJECTIVE\nRepeated hospitalizations and arrests or incarcerations diminish the ability of individuals with serious mental illnesses to pursue recovery. Community mental health systems need new models to address recidivism as well as service fragmentation, lack of engagement by local stakeholders, and poor communication between mental health providers and the police. This study examined the initial effects on institutional recidivism and measures of recovery among persons enrolled in Opening Doors to Recovery, an intensive, team-based community support program for persons with mental illness and a history of inpatient psychiatric recidivism. 
A randomized controlled trial of the model is underway.\n\n\nMETHODS\nThe number of hospitalizations, days hospitalized, and arrests (all from state administrative sources) in the year before enrollment and during the first 12 months of enrollment in the program were compared. Longitudinal trajectories of recovery-using three self-report and five clinician-rated measures-were examined. Analyses accounted for baseline symptom severity and intensity of involvement in the program.\n\n\nRESULTS\nOne hundred participants were enrolled, and 72 were included in the analyses. Hospitalizations decreased, from 1.9±1.6 to .6±.9 (p<.001), as did hospital days, from 27.6±36.4 to 14.9±41.3 (p<.001), although number of arrests (which are rare events) did not. Significant linear trends were observed for recovery measures, and trajectories of improvement were apparent across the entire follow-up period.\n\n\nCONCLUSIONS\nOpening Doors to Recovery holds promise as a new service approach for reducing hospital recidivism and promoting recovery in community mental health systems and is deserving of further controlled testing.", "title": "" }, { "docid": "ce33bd2f243e2e8d6bd4202720d82ed8", "text": "BACKGROUND AND OBJECTIVES\nTo assess the prevalence, etiology, diagnosis of primary and secondary lactose intolerance (LI), including age of onset, among children 1-5 years of age. Suspected/perceived lactose intolerance can lead to dietary restrictions which may increase risk of future health issues.\n\n\nMETHODS AND STUDY DESIGN\nMEDLINE, CAB Abstract, and Embase were searched for articles published from January 1995-June 2015 related to lactose intolerance in young children. Authors independently screened titles/abstracts, full text articles, for eligibility against a priori inclusion/exclusion criteria. Two reviewers extracted data and assessed quality of the included studies.\n\n\nRESULTS\nThe search identified 579 articles; 20 studies, the majority of which were crosssectional, were included in the qualitative synthesis. Few studies reported prevalence of primary LI in children aged 1-5 years; those that did reported a range between 0-17.9%. Prevalence of secondary LI was 0-19%. Hydrogen breath test was the most common method used to diagnose LI. None of the included studies reported age of onset of primary LI.\n\n\nCONCLUSIONS\nThere is limited recent evidence on the prevalence of LI in this age group. The low number of studies and wide range of methodologies used to diagnose LI means that comparison and interpretation, particularly of geographical trends, is compromised. Current understanding appears to rely on data generated in the 1960/70s, with varied qualities of evidence. New, high quality studies are necessary to understand the true prevalence of LI. This review is registered with the International Prospective Register for Systematic Reviews (PROSPERO).", "title": "" }, { "docid": "dcf4de4629be22628f5b226a1dcee856", "text": "Paper prototyping offers unique affordances for interface design. However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. 
Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as \"walked through\" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.", "title": "" }, { "docid": "9b430645f7b0da19b2c55d43985259d8", "text": "Research on human spatial memory and navigational ability has recently shown the strong influence of reference systems in spatial memory on the ways spatial information is accessed in navigation and other spatially oriented tasks. One of the main findings can be characterized as a large cognitive cost, both in terms of speed and accuracy that occurs whenever the reference system used to encode spatial information in memory is not aligned with the reference system required by a particular task. In this paper, the role of aligned and misaligned reference systems is discussed in the context of the built environment and modern architecture. The role of architectural design on the perception and mental representation of space by humans is investigated. The navigability and usability of built space is systematically analysed in the light of cognitive theories of spatial and navigational abilities of humans. It is concluded that a building’s navigability and related wayfinding issues can benefit from architectural design that takes into account basic results of spatial cognition research. 1 Wayfinding and Architecture Life takes place in space and humans, like other organisms, have developed adaptive strategies to find their way around their environment. Tasks such as identifying a place or direction, retracing one’s path, or navigating a large-scale space, are essential elements to mobile organisms. Most of these spatial abilities have evolved in natural environments over a very long time, using properties present in nature as cues for spatial orientation and wayfinding. With the rise of complex social structure and culture, humans began to modify their natural environment to better fit their needs. The emergence of primitive dwellings mainly provided shelter, but at the same time allowed builders to create environments whose spatial structure “regulated” the chaotic natural environment. They did this by using basic measurements and geometric relations, such as straight lines, right angles, etc., as the basic elements of design (Le Corbusier, 1931, p. 69ff.) In modern society, most of our lives take place in similar regulated, human-made spatial environments, with paths, tracks, streets, and hallways as the main arteries of human locomotion. Architecture and landscape architecture embody the human effort to structure space in meaningful and useful ways. Architectural design of space has multiple functions. Architecture is designed to satisfy the different representational, functional, aesthetic, and emotional needs of organizations and the people who live or work in these structures. 
In this chapter, emphasis lies on a specific functional aspect of architectural design: human wayfinding. Many approaches to improving architecture focus on functional issues, like improved ecological design, the creation of improved workplaces, better climate control, lighting conditions, or social meeting areas. Similarly, when focusing on the mobility of humans, the ease of wayfinding within a building can be seen as an essential function of a building’s design (Arthur & Passini, 1992; Passini, 1984). When focusing on wayfinding issues in buildings, cities, and landscapes, the designed spatial environment can be seen as an important tool in achieving a particular goal, e.g., reaching a destination or finding an exit in case of emergency. This view, if taken to a literal extreme, is summarized by Le Corbusier’s (1931) notion of the building as a “machine,” mirroring in architecture the engineering ideals of efficiency and functionality found in airplanes and cars. In the narrow sense of wayfinding, a building thus can be considered of good design if it allows easy and error-free navigation. This view is also adopted by Passini (1984), who states that “although the architecture and the spatial configuration of a building generate the wayfinding problems people have to solve, they are also a wayfinding support system in that they contain the information necessary to solve the problem” (p. 110). Like other problems of engineering, the wayfinding problem in architecture should have one or more solutions that can be evaluated. This view of architecture can be contrasted with the alternative view of architecture as “built philosophy”. According to this latter view, architecture, like art, expresses ideas and cultural progress by shaping the spatial structure of the world – a view which gives consideration to the users as part of the philosophical approach but not necessarily from a usability perspective. Viewing wayfinding within the built environment as a “man-machine-interaction” problem makes clear that good architectural design with respect to navigability needs to take two factors into account. First, the human user comes equipped with particular sensory, perceptual, motoric, and cognitive abilities. Knowledge of these abilities and the limitations of an average user or special user populations thus is a prerequisite for good design. Second, structural, functional, financial, and other design considerations restrict the degrees of freedom architects have in designing usable spaces. In the following sections, we first focus on basic research on human spatial cognition. Even though not all of it is directly applicable to architectural design and wayfinding, it lays the foundation for more specific analyses in part 3 and 4. In part 3, the emphasis is on a specific research question that recently has attracted some attention: the role of environmental structure (e.g., building and street layout) for the selection of a spatial reference frame. In part 4, implications for architectural design are discussed by means of two real-world examples. 2 The human user in wayfinding 2.1 Navigational strategies Finding one’s way in the environment, reaching a destination, or remembering the location of relevant objects are some of the elementary tasks of human activity. Fortunately, human navigators are well equipped with an array of flexible navigational strategies, which usually enable them to master their spatial environment (Allen, 1999). 
In addition, human navigation can rely on tools that extend human sensory and mnemonic abilities. Most spatial or navigational strategies are so common that they do not occur to us when we perform them. Walking down a hallway we hardly realize that the optical and acoustical flows give us rich information about where we are headed and whether we will collide with other objects (Gibson, 1979). Our perception of other objects already includes physical and social models on how they will move and where they will be once we reach the point where paths might cross. Following a path can consist of following a particular visual texture (e.g., asphalt) or feeling a handrail in the dark by touch. At places where multiple continuing paths are possible, we might have learned to associate the scene with a particular action (e.g., turn left; Schölkopf & Mallot, 1995), or we might try to approximate a heading direction by choosing the path that most closely resembles this direction. When in doubt about our path we might ask another person or consult a map. As is evident from this brief (and not exhaustive) description, navigational strategies and activities are rich in diversity and adaptability (for an overview see Golledge, 1999; Werner, Krieg-Brückner, & Herrmann, 2000), some of which are aided by architectural design and signage (see Arthur & Passini, 1992; Passini, 1984). Despite the large number of different navigational strategies, people still experience problems finding their way or even feel lost momentarily. This feeling of being lost might reflect the lack of a key component of human wayfinding: knowledge about where one is located in an environment – with respect to one’s goal, one’s starting location, or with respect to the global environment one is in. As Lynch put it, “the terror of being lost comes from the necessity that a mobile organism be oriented in its surroundings” (1960, p. 125.) Some wayfinding strategies, like vector navigation, rely heavily on this information. Other strategies, e.g. piloting or path-following, which are based on purely local information can benefit from even vague locational knowledge as a redundant source of information to validate or question navigational decisions (see Werner et al., 2000, for examples.) Proficient signage in buildings, on the other hand, relies on a different strategy. It relieves a user from keeping track of his or her position in space by indicating the correct navigational choice whenever the choice becomes relevant. Keeping track of one’s position during navigation can be done quite easily if access to global landmarks, reference directions, or coordinates is possible. Unfortunately, the built environment often does not allow for simple navigational strategies based on these types of information. Instead, spatial information has to be integrated across multiple places, paths, turns, and extended periods of time (see Poucet, 1993, for an interesting model of how this can be achieved). In the next section we will describe an essential ingredient of this integration – the mental representation of spatial information in memory. 2.2 Alignment effects in spatial memory When observing tourists in an unfamiliar environment, one often notices people frantically turning maps to align the noticeable landmarks depicted in the map with the visible landmarks as seen from the viewpoint of the tourist. This type of behavior indicates a well-established cognitive principle (Levine, Jankovic, & Palij, 1982). 
Observers more easily comprehend and use information depicted in “You-are-here” (YAH) maps if the up-down direction of the map coincides with the front-back direction of the observer. In this situation, the natural preference of directional mapping of top to front and bottom to back is used, and left and right in the map stay left and right in the depicted world. While th", "title": "" }, { "docid": "31ed7279f2b3192cd0fc3f7aa65fd5cf", "text": "Ontology matching is one of the most important tasks in achieving the goal of the semantic web. To fulfill this task, element-level matching is an indispensable step to obtain the fundamental alignment. In the element-level matching process, previous work generally utilizes WordNet to compute the semantic similarities among elements, but WordNet is limited by its coverage. In this paper, we introduce word embeddings to the field of ontology matching. We verified the superiority of word embeddings and presented a hybrid method to incorporate word embeddings into the computation of the semantic similarities among elements. We performed the experiments on the OAEI benchmark, the conference track, and real-world ontologies. The experimental results show that in element-level matching, word embeddings could achieve better performance than previous methods.", "title": "" }, { "docid": "50ab05d133dceaacf71b28b6a4b547bc", "text": "The ability to measure human hand motions and interaction forces is critical to improving our understanding of manual gesturing and grasp mechanics. This knowledge serves as a basis for developing better tools for human skill training and rehabilitation, exploring more effective methods of designing and controlling robotic hands, and creating more sophisticated human-computer interaction devices which use complex hand motions as control inputs. This paper presents work on the design, fabrication, and experimental validation of a soft sensor-embedded glove which measures both hand motion and contact pressures during human gesturing and manipulation tasks. We design an array of liquid-metal embedded elastomer sensors to measure up to hundreds of Newtons of interaction forces across the human palm during manipulation tasks and to measure skin strains across phalangeal and carpal joints for joint motion tracking. The elastomeric sensors provide the mechanical compliance necessary to accommodate anatomical variations and permit a normal range of hand motion. We explore methods of assembling this soft sensor glove from modular, individually fabricated pressure and strain sensors and develop design guidelines for their mechanical integration. Experimental validation of a soft finger glove prototype demonstrates the sensitivity range of the designed sensors and the mechanical robustness of the proposed assembly method, and provides a basis for the production of a complete soft sensor glove from inexpensive modular sensor components.", "title": "" }, { "docid": "cd78dd2ef989917c01a325a460c07223", "text": "This paper proposes a multi-joint-gripper that achieves envelope grasping for objects of unknown shape. The proposed mechanism is based on a chain of Differential Gear Systems (DGS) controlled by only one motor. It also has a Variable Stiffness Mechanism (VSM) that controls joint stiffness to relieve interfering effects from the grasping environment and achieve dexterous grasping. 
The experiments elucidate that the developed gripper achieves envelope grasping; the posture of the gripper automatically fits the shape of the object with no sensory feedback. They also show that the VSM effectively works to relieve external interference. This paper shows the mechanism and experimental results of the second test machine, which was developed by inheriting the DGS concept used in the first test machine but with a completely altered VSM.", "title": "" }, { "docid": "db98c5b7606770fe75478cdd5d3d4fbf", "text": "Objective: Procalcitonin (PCT) and C-reactive protein (CRP) plasma concentrations were measured after different types of surgery to analyze a possible postoperative induction of procalcitonin (PCT), which might interfere with the diagnosis of bacterial infection or sepsis by PCT. Design: PCT and CRP plasma levels as well as clinical symptoms of infection were prospectively registered preoperatively and 5 days postoperatively. Setting: University hospital, in-patient postoperative care. Patients: One hundred thirty patients were followed up; 117 patients with a normal postoperative course were statistically analyzed. Interventions: None. Measurements and results: PCT concentrations were moderately increased above the normal range in 32% of patients after minor and aseptic surgery, in 59% after cardiac and thoracic surgery, and in 95% of patients after surgery of the intestine. In patients with an abnormal postoperative course, PCT was increased in 12 of 13 patients. CRP was increased in almost all patients. Conclusions: Postoperative induction of PCT largely depends on the type of surgery. Intestinal surgery and major operations more often increase PCT, whereas it is normal in the majority of patients after minor and primarily aseptic surgery. PCT can thus be used postoperatively for diagnostic purposes only when the range of PCT concentrations during the normal course of a certain type of surgery is considered and concentrations are followed up.", "title": "" }, { "docid": "6c720d68e8cea8f4c1fc17006af464cd", "text": "In this paper, a high-range 60-GHz monostatic transceiver system suitable for frequency-modulated continuous-wave (FMCW) applications is presented. The RF integrated circuit is fabricated using a 0.13-μm SiGe BiCMOS technology with fT/fmax of 250/340 GHz and occupies a very compact area of 1.42 × 0.72 mm². All of the internal blocks are designed fully differential, with an in-phase/quadrature receiver (RX) conversion gain of 14.8 dB and −18.2 dBm of input-referred 1-dB compression point, and a transmitter (TX) with 6.4 dBm of output power. The 60-GHz voltage-controlled oscillator is a push-push Colpitts oscillator integrated with a frequency divider with an output frequency between 910 MHz and 1 GHz, with the help of a 3-bit frequency tuning mechanism for external phase-locked loop operations. Between the TX and RX channels, a tunable coupler is placed to guarantee high isolation between channels, which could withstand fabrication failures and provide a single differential antenna output. 
On the TX side, two power detectors are placed in order to monitor the transmitted and reflected powers on the TX channel by passing through a branch-line coupler for built-in-self-test purposes. The total current consumption of this transceiver is 156 mA at 3.3 V of single supply. Considering the successful real-time radar measurements, which the radar is able to detect the objects in more than 90-m range, it proves the suitability of this monostatic chip in high-range FMCW radar systems.", "title": "" }, { "docid": "14507cdb603716a76fa101b0a28a6a25", "text": "This paper examines the potential impact of artificial intelligence (A.I.) on economic growth. We model A.I. as the latest form of automation, a broader process dating back more than 200 years. Electricity, internal combustion engines, and semiconductors facilitated automation in the last century, but A.I. now seems poised to automate many tasks once thought to be out of reach, from driving cars to making medical recommendations and beyond. How will this affect economic growth and the division of income between labor and capital? What about the potential emergence of “singularities” and “superintelligence,” concepts that animate many discussions in the machine intelligence community? How will the linkages between A.I. and growth be mediated by firm-level considerations, including organization and market structure? The goal throughout is to refine a set of critical questions about A.I. and economic growth and to contribute to shaping an agenda for the field. One theme that emerges is based on Baumol’s “cost disease” insight: growth may be constrained not by what we are good at but rather by what is essential and yet hard to improve. ∗We are grateful to Ajay Agrawal, Mohammad Ahmadpoor, Adrien Auclert, Sebastian Di Tella, Patrick Francois, Joshua Gans, Avi Goldfarb, Pete Klenow, Hannes Mahlmberg, Pascual Restrepo, Chris Tonetti, Michael Webb, and participants at the NBER Conference on Artificial Intelligence for helpful discussion and comments. 2 P. AGHION, B. JONES, AND C. JONES", "title": "" }, { "docid": "73e0ef5aa2eed22eb03d93d0ccfe5aed", "text": "This article offers a formal account of curiosity and insight in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how people attain insight and understanding using just a handful of observations, which are solicited through curious behavior. We use simulations of abstract rule learning and approximate Bayesian inference to show that minimizing (expected) variational free energy leads to active sampling of novel contingencies. This epistemic behavior closes explanatory gaps in generative models of the world, thereby reducing uncertainty and satisfying curiosity. We then move from epistemic learning to model selection or structure learning to show how abductive processes emerge when agents test plausible hypotheses about symmetries (i.e., invariances or rules) in their generative models. The ensuing Bayesian model reduction evinces mechanisms associated with sleep and has all the hallmarks of “aha” moments. 
This formulation moves toward a computational account of consciousness in the pre-Cartesian sense of sharable knowledge (i.e., con: “together”; scire: “to know”).", "title": "" }, { "docid": "8a1e2eddd9107412bd0d34bfde73322d", "text": "The aim of this meta-analysis was to compare social desirability scores between paper and computer surveys. Subgroup analyses were conducted with Internet connectivity, level of anonymity, individual or group test setting, possibility of skipping items, possibility of backtracking previous items, inclusion of questions of sensitive nature, and social desirability scale type as moderators. Subgroup analyses were also conducted for study characteristics, namely the randomisation of participants, sample type (students vs. other), and study design (betweenvs. within-subjects). Social desirability scores between the two administration modes were compared for 51 studies that included 62 independent samples and 16,700 unique participants. The overall effect of administration mode was close to zero (Cohen’s d = 0.00 for fixed-effect and d = −0.01 for random-effects meta-analysis). The majority of the effect sizes in the subgroup analyses were not significantly different from zero either. The effect sizes were close to zero for both Internet and offline surveys. In conclusion, the totality of evidence indicates that there is no difference in social desirability between paper-and-pencil surveys and computer surveys. Publication year and sample size were positively correlated (ρ = .64), which suggests that certain of the large effects that have been found in the past may have been due to sampling error.", "title": "" }, { "docid": "40fe24e70fd1be847e9f89b82ff75b28", "text": "Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multitask learning. Specifically, we propose a new sharing unit: \"cross-stitch\" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.", "title": "" }, { "docid": "33dfea6420b9946b420b0963206df8fa", "text": "The keyword ‘Internet of Things’ (IoT) is used to refer to many concepts related to the extension of the Internet and the Web into the physical daily life. This is supported by means of the widespread of distributed devices with embedded identification, sensing and/or actuation capabilities. IoT envisions a future in which both digital and physical entities can be linked, by means of appropriate information and communication technologies, to enable a big range of applications and services. In this article, I will highlight some details of these technologies, applications and research challenges for IoT.", "title": "" }, { "docid": "0c1e7ff806fd648dbd7adec1ec639413", "text": "We recently proposed the Rate Control Protocol (RCP) as way to minimize download times (or flow-completion times). 
Simulations suggest that if RCP were widely deployed, downloads would frequently finish ten times faster than with TCP. This is because RCP involves explicit feedback from the routers along the path, allowing a sender to pick a fast starting rate and adapt quickly to network conditions. RCP is particularly appealing because it can be shown to be stable under broad operating conditions, and its performance is independent of the flow-size distribution and the RTT. Although it requires changes to the routers, the changes are small: the routers keep no per-flow state or per-flow queues, and the per-packet processing is minimal. However, the bar is high for a new congestion control mechanism - introducing a new scheme requires enormous change, and the argument needs to be compelling. And so, to enable some scientific and repeatable experiments with RCP, we have built and tested an open and public implementation of RCP; we have made available both the end-host software and the router hardware. In this paper we describe our end-host implementation of RCP in Linux, and our router implementation in Verilog (on the NetFPGA platform). We hope that others will be able to use these implementations to experiment with RCP and further our understanding of congestion control.", "title": "" } ]
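The RCP passage that closes the list above describes routers computing a single fair rate from aggregate measurements and stamping it into packet headers, with no per-flow state. As a rough illustration of that idea only, the Python sketch below implements an RCP-style rate update; the shape of the control law, the alpha/beta gains, and the update interval are assumptions based on published RCP descriptions, not details taken from this passage.

    class RcpRouter:
        """RCP-style router rate computation (illustrative sketch, assumed parameters)."""

        def __init__(self, capacity_bps, alpha=0.5, beta=0.25):
            self.capacity = float(capacity_bps)  # link capacity C
            self.alpha = alpha                   # gain on spare capacity
            self.beta = beta                     # gain on queue drain
            self.rate = self.capacity            # advertised per-flow rate R

        def update(self, input_rate_bps, queue_bytes, avg_rtt_s, interval_s):
            # Only aggregate quantities are used: measured input traffic and queue size.
            spare = self.capacity - input_rate_bps           # unused capacity
            queue_drain = (queue_bytes * 8.0) / avg_rtt_s    # drain the standing queue in ~1 RTT
            feedback = self.alpha * spare - self.beta * queue_drain
            self.rate *= 1.0 + (interval_s / avg_rtt_s) * feedback / self.capacity
            self.rate = min(max(self.rate, 1.0), self.capacity)
            return self.rate

    # Each packet would carry the smallest rate seen along its path:
    # allowed_rate = min(router.update(...) for router in path)

A sender would then transmit at the minimum rate advertised along its path, which is what lets it start fast and adapt within roughly one round-trip time while the routers keep no per-flow state.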
scidocsrr
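Another passage in the record above, the abstract on "cross-stitch" units for multi-task ConvNets, describes combining the activations of two task-specific networks through a small set of learned mixing weights. The sketch below is only one reading of that description; the 2x2 mixing matrix and its initial values are assumptions rather than the authors' code, and a real implementation would learn these weights end-to-end together with both networks.

    import numpy as np

    class CrossStitchUnit:
        """Mix same-shaped activations from two task networks (illustrative sketch)."""

        def __init__(self):
            # Diagonal entries keep each task's own features, off-diagonal entries
            # share features across tasks; 0.9/0.1 is an assumed initialization.
            self.alpha = np.array([[0.9, 0.1],
                                   [0.1, 0.9]])

        def forward(self, x_a, x_b):
            out_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
            out_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
            return out_a, out_b

    unit = CrossStitchUnit()
    feat_a = np.random.rand(8, 16, 16)  # activation maps from the network for task A
    feat_b = np.random.rand(8, 16, 16)  # matching maps from the network for task B
    mixed_a, mixed_b = unit.forward(feat_a, feat_b)

During training, gradients flow through both paths, which is how such a unit can learn how much sharing helps each pair of tasks.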
19c6b0ccb2a59a79378ae9309e0e5b39
Strengthening Customer Loyalty Through Intimacy and Passion : Roles of Customer – Firm Affection and Customer – Staff Relationships in Services
[ { "docid": "e89cf17cf4d336468f75173767af63a5", "text": "This article explores the possibility that romantic love is an attachment process--a biosocial process by which affectional bonds are formed between adult lovers, just as affectional bonds are formed earlier in life between human infants and their parents. Key components of attachment theory, developed by Bowlby, Ainsworth, and others to explain the development of affectional bonds in infancy, were translated into terms appropriate to adult romantic love. The translation centered on the three major styles of attachment in infancy--secure, avoidant, and anxious/ambivalent--and on the notion that continuity of relationship style is due in part to mental models (Bowlby's \"inner working models\") of self and social life. These models, and hence a person's attachment style, are seen as determined in part by childhood relationships with parents. Two questionnaire studies indicated that relative prevalence of the three attachment styles is roughly the same in adulthood as in infancy, the three kinds of adults differ predictably in the way they experience romantic love, and attachment style is related in theoretically meaningful ways to mental models of self and social relationships and to relationship experiences with parents. Implications for theories of romantic love are discussed, as are measurement problems and other issues related to future tests of the attachment perspective.", "title": "" }, { "docid": "03c4e98d0945c9fcd5f8ded1129ce0ff", "text": "On the basis of the proposition that love promotes commitment, the authors predicted that love would motivate approach, have a distinct signal, and correlate with commitment-enhancing processes when relationships are threatened. The authors studied romantic partners and adolescent opposite-sex friends during interactions that elicited love and threatened the bond. As expected, the experience of love correlated with approach-related states (desire, sympathy). Providing evidence for a nonverbal display of love, four affiliation cues (head nods, Duchenne smiles, gesticulation, forward leans) correlated with self-reports and partner estimates of love. Finally, the experience and display of love correlated with commitment-enhancing processes (e.g., constructive conflict resolution, perceived trust) when the relationship was threatened. Discussion focused on love, positive emotion, and relationships.", "title": "" } ]
[ { "docid": "6c12755ba2580d5d9b794b9a33c0304a", "text": "A fundamental part of conducting cross-disciplinary web science research is having useful, high-quality datasets that provide value to studies across disciplines. In this paper, we introduce a large, hand-coded corpus of online harassment data. A team of researchers collaboratively developed a codebook using grounded theory and labeled 35,000 tweets. Our resulting dataset has roughly 15% positive harassment examples and 85% negative examples. This data is useful for training machine learning models, identifying textual and linguistic features of online harassment, and for studying the nature of harassing comments and the culture of trolling.", "title": "" }, { "docid": "e15c4c119ec5969e0dd48ebbdc5a753f", "text": "We present a method for unsupervised open-domain relation discovery. In contrast to previous (mostly generative and agglomerative clustering) approaches, our model relies on rich contextual features and makes minimal independence assumptions. The model is composed of two parts: a feature-rich relation extractor, which predicts a semantic relation between two entities, and a factorization model, which reconstructs arguments (i.e., the entities) relying on the predicted relation. The two components are estimated jointly so as to minimize errors in recovering arguments. We study factorization models inspired by previous work in relation factorization and selectional preference modeling. Our models substantially outperform the generative and agglomerative-clustering counterparts and achieve state-of-the-art performance.", "title": "" }, { "docid": "13856095dbdb5a1e2ddc5ae70695fcce", "text": "This paper presents a new multitask learning framework that learns a shared representation among the tasks, incorporating both task and feature clusters. The jointlyinduced clusters yield a shared latent subspace where task relationships are learned more effectively and more generally than in state-of-the-art multitask learning methods. The proposed general framework enables the derivation of more specific or restricted stateof-the-art multitask methods. The paper also proposes a highly-scalable multitask learning algorithm, based on the new framework, using conjugate gradient descent and generalized Sylvester equations. Experimental results on synthetic and benchmark datasets show that the proposed method systematically outperforms several state-of-the-art multitask learning methods.", "title": "" }, { "docid": "22fa045c2bc3d2ea85bce65618a3276a", "text": "Interest in analyzing electricity consumption data of private households has grown steadily in the last years. Several authors have for instance focused on identifying groups of households with similar consumption patterns or on providing feedback to consumers in order to motivate them to reduce their energy consumption. In this paper, we propose to use electricity consumption data to classify households according to pre-defined \"properties\" of interest. Examples of these properties include the floor area of a household or the number of its occupants. Energy providers can leverage knowledge of such household properties to shape premium services (e.g., energy consulting) for their customers. We present a classification system - called CLASS - that takes as input electricity consumption data of a private household and provides as output the estimated values of its properties. We describe the design and implementation of CLASS and evaluate its performance. 
To this end, we rely on electricity consumption traces from 3,488 private households, collected at a 30-minute granularity and for a period of more than 1.5 years. Our evaluation shows that CLASS - relying on electricity consumption data only - can estimate the majority of the considered household properties with more than 70% accuracy. For some of the properties, CLASS's accuracy exceeds 80%. Furthermore, we show that for selected properties the use of a priori information can increase classification accuracy by up to 11%.", "title": "" }, { "docid": "1091a0c344fe06d06b8eadce4c2b4085", "text": "Achievement behavior is denned as behavior directed at developing or demonstrating high rather than low ability. It is shown that ability can be conceived in two ways. First, ability can be judged high or low with reference to the individual's own past performance or knowledge. In this context, gains in mastery indicate competence. Second, ability can be judged as capacity relative to that of others. In this context, a gain in mastery alone does not indicate high ability. To demonstrate high capacity, one must achieve more with equal effort or use less effort than do others for an equal performance. The conditions under which these different conceptions of ability function as individuals' goals and the nature of subjective experience in each case are specified. Different predictions of task choice and performance are derived and tested for each case.", "title": "" }, { "docid": "cab2b7e4a0dda985d3cff9330c3e0cba", "text": "Understanding the basic concepts of chemistry is very important for the students of secondary school level and university level. The Computer Assisted Teaching and Learning (CATL) methods are marked by the usage of computers in teaching and learning processes. Usage of WORD, EXCEL, POWERPOINT, ACCESS, PHOTOSHOP etc., as well as the use of specialized packages such as CHEMDRAW, SCIFINDER etc., can be worth mentioning. The role of internet in feeding the thirst of students is comparably far better than the classroom teaching. By the use of CATL methods, students can acquire high quality of mental models.", "title": "" }, { "docid": "e50c921d664f970daa8050bad282e066", "text": "In the complex decision-environments that characterize e-business settings, it is important to permit decision-makers to proactively manage data quality. In this paper we propose a decision-support framework that permits decision-makers to gauge quality both in an objective (context-independent) and in a context-dependent manner. The framework is based on the information product approach and uses the Information Product Map (IPMAP). We illustrate its application in evaluating data quality using completeness—a data quality dimension that is acknowledged as important. A decision-support tool (IPView) for managing data quality that incorporates the proposed framework is also described. D 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0f1f6570abf200de786221f28210ed78", "text": "This paper presents a novel idea for reducing the data storage problems in the self-driving cars. Self-driving cars is a technology that is observed by the modern word with most curiosity. However the vulnerability with the car is the growing data and the approach for handling such huge amount of data growth. This paper proposes a cloud based self-driving car which can optimize the data storage problems in such cars. 
The idea is to not store any data in the car, rather download everything from the cloud as per the need of the travel. This allows the car to not keep a huge amount of data and rely on a cloud infrastructure for the drive.", "title": "" }, { "docid": "7cd655bbea3b088618a196382b33ed1e", "text": "Story generation is a well-recognized task in computational creativity research, but one that can be difficult to evaluate empirically. It is often inefficient and costly to rely solely on human feedback for judging the quality of generated stories. We address this by examining the use of linguistic analyses for automated evaluation, using metrics from existing work on predicting writing quality. We apply these metrics specifically to story continuation, where a model is given the beginning of a story and generates the next sentence, which is useful for systems that interactively support authors’ creativity in writing. We compare sentences generated by different existing models to human-authored ones according to the analyses. The results show some meaningful differences between the models, suggesting that this evaluation approach may be advantageous for future research.", "title": "" }, { "docid": "42d79800699b372489ad6c95ac91b21c", "text": "Being able to reason in an environment with a large number of discrete actions is essential to bringing reinforcement learning to a larger class of problems. Recommender systems, industrial plants and language models are only some of the many real-world tasks involving large numbers of discrete actions for which current methods can be difficult or even impossible to apply. An ability to generalize over the set of actions as well as sub-linear complexity relative to the size of the set are both necessary to handle such tasks. Current approaches are not able to provide both of these, which motivates the work in this paper. Our proposed approach leverages prior information about the actions to embed them in a continuous space upon which it can generalize. Additionally, approximate nearest-neighbor methods allow for logarithmic-time lookup complexity relative to the number of actions, which is necessary for time-wise tractable training. This combined approach allows reinforcement learning methods to be applied to large-scale learning problems previously intractable with current methods. We demonstrate our algorithm’s abilities on a series of tasks having up to one million actions.", "title": "" }, { "docid": "3378b1b16a066d9ce89400dc413910c8", "text": "It is now widely acknowledged that analyzing the intrinsic geometrical features of the underlying image is essential in many applications including image processing. In order to achieve this, several directional image representation schemes have been proposed. In this paper, we develop the discrete shearlet transform (DST) which provides efficient multiscale directional representation and show that the implementation of the transform is built in the discrete framework based on a multiresolution analysis (MRA). We assess the performance of the DST in image denoising and approximation applications. In image approximations, our approximation scheme using the DST outperforms the discrete wavelet transform (DWT) while the computational cost of our scheme is comparable to the DWT. 
Also, in image denoising, the DST compares favorably with other existing transforms in the literature.", "title": "" }, { "docid": "7a4f42c389dbca2f3c13469204a22edd", "text": "This article attempts to capture and summarize the known technical information and recommendations for analysis of furan test results. It will also provide the technical basis for continued gathering and evaluation of furan data for liquid power transformers, and provide a recommended structure for collecting that data.", "title": "" }, { "docid": "c14c37eb74a994c0799d39ab53abf311", "text": "All learners learn best when they are motivated; so do adults. Hence, the way to ensure success of students in higher education is first to know what motivates and sustains them in the learning process. Based on a study of 203 university students, this paper presents eight top most motivating factors for adult learners in higher education. These include quality of instruction; quality of curriculum; relevance and pragmatism; interactive classrooms and effective management practices; progressive assessment and timely feedback; self-directedness; conducive learning environment; and effective academic advising practices. The study concludes that these eight factors are critical to eliciting or enhancing the will power in students in higher education toward successful learning. The implications for practice and further research are also discussed.", "title": "" }, { "docid": "e901f23dab17abc3896868929ae71854", "text": "Recent large-scale hierarchical classification tasks typically have tens of thousands of classes on which the most widely used approach to multiclass classification--one-versus-rest--becomes intractable due to computational complexity. The top-down methods are usually adopted instead, but they are less accurate because of the so-called error-propagation problem in their classifying phase. To address this problem, this paper proposes a meta-top-down method that employs metaclassification to enhance the normal top-down classifying procedure. The proposed method is first analyzed theoretically on complexity and accuracy, and then applied to five real-world large-scale data sets. The experimental results indicate that the classification accuracy is largely improved, while the increased time costs are smaller than most of the existing approaches.", "title": "" }, { "docid": "b24fc322e0fec700ec0e647c31cfd74d", "text": "Organometal trihalide perovskite solar cells offer the promise of a low-cost easily manufacturable solar technology, compatible with large-scale low-temperature solution processing. Within 1 year of development, solar-to-electric power-conversion efficiencies have risen to over 15%, and further imminent improvements are expected. Here we show that this technology can be successfully made compatible with electron acceptor and donor materials generally used in organic photovoltaics. We demonstrate that a single thin film of the low-temperature solution-processed organometal trihalide perovskite absorber CH3NH3PbI3-xClx, sandwiched between organic contacts can exhibit devices with power-conversion efficiency of up to 10% on glass substrates and over 6% on flexible polymer substrates. 
This work represents an important step forward, as it removes most barriers to adoption of the perovskite technology by the organic photovoltaic community, and can thus utilize the extensive existing knowledge of hybrid interfaces for further device improvements and flexible processing platforms.", "title": "" }, { "docid": "3510bcd9d52729766e2abe2111f8be95", "text": "Metaphors are common elements of language that allow us to creatively stretch the limits of word meaning. However, metaphors vary in their degree of novelty, which determines whether people must create new meanings on-line or retrieve previously known metaphorical meanings from memory. Such variations affect the degree to which general cognitive capacities such as executive control are required for successful comprehension. We investigated whether individual differences in executive control relate to metaphor processing using eye movement measures of reading. Thirty-nine participants read sentences including metaphors or idioms, another form of figurative language that is more likely to rely on meaning retrieval. They also completed the AX-CPT, a domain-general executive control task. In Experiment 1, we examined sentences containing metaphorical or literal uses of verbs, presented with or without prior context. In Experiment 2, we examined sentences containing idioms or literal phrases for the same participants to determine whether the link to executive control was qualitatively similar or different to Experiment 1. When metaphors were low familiar, all people read verbs used as metaphors more slowly than verbs used literally (this difference was smaller for high familiar metaphors). Executive control capacity modulated this pattern in that high executive control readers spent more time reading verbs when a prior context forced a particular interpretation (metaphorical or literal), and they had faster total metaphor reading times when there was a prior context. Interestingly, executive control did not relate to idiom processing for the same readers. Here, all readers had faster total reading times for high familiar idioms than literal phrases. Thus, executive control relates to metaphor but not idiom processing for these readers, and for the particular metaphor and idiom reading manipulations presented.", "title": "" }, { "docid": "f39abb67a6c392369c5618f5c33d93cf", "text": "In our research, we view human behavior as a structured sequence of context-sensitive decisions. We develop a conditional probabilistic model for predicting human decisions given the contextual situation. Our approach employs the principle of maximum entropy within the Markov Decision Process framework. Modeling human behavior is reduced to recovering a context-sensitive utility function that explains demonstrated behavior within the probabilistic model. In this work, we review the development of our probabilistic model (Ziebart et al. 2008a) and the results of its application to modeling the context-sensitive route preferences of drivers (Ziebart et al. 2008b). We additionally expand the approach’s applicability to domains with stochastic dynamics, present preliminary experiments on modeling time-usage, and discuss remaining challenges for applying our approach to other human behavior modeling problems.", "title": "" }, { "docid": "5c58487cb31d71ee8154be2c79b8c19c", "text": "Storytelling is a well known and ancient art form. Fascinating and compelling characters have animated literature around the world from the beginning of the written word. 
Today, scientific research has laid the foundations for a sound empirical understanding of storytelling as a clear aid to memory, as a means of making sense of the world, as a way to make and strengthen emotional connections, and as way of recognizing and identifying with brands of any type. Whether you are dealing with product brands or company brands, storytelling is essential to successful branding, since your brand is the sum of all your corporate behaviors and communications that inform your customers’ experiences with your product or company.", "title": "" }, { "docid": "fce4b1fcd876094bcec9c6a9659ff5d5", "text": "Organelle biogenesis is concomitant to organelle inheritance during cell division. It is necessary that organelles double their size and divide to give rise to two identical daughter cells. Mitochondrial biogenesis occurs by growth and division of pre-existing organelles and is temporally coordinated with cell cycle events [1]. However, mitochondrial biogenesis is not only produced in association with cell division. It can be produced in response to an oxidative stimulus, to an increase in the energy requirements of the cells, to exercise training, to electrical stimulation, to hormones, during development, in certain mitochondrial diseases, etc. [2]. Mitochondrial biogenesis is therefore defined as the process via which cells increase their individual mitochondrial mass [3]. Recent discoveries have raised attention to mitochondrial biogenesis as a potential target to treat diseases which up to date do not have an efficient cure. Mitochondria, as the major ROS producer and the major antioxidant producer exert a crucial role within the cell mediating processes such as apoptosis, detoxification, Ca2+ buffering, etc. This pivotal role makes mitochondria a potential target to treat a great variety of diseases. Mitochondrial biogenesis can be pharmacologically manipulated. This issue tries to cover a number of approaches to treat several diseases through triggering mitochondrial biogenesis. It contains recent discoveries in this novel field, focusing on advanced mitochondrial therapies to chronic and degenerative diseases, mitochondrial diseases, lifespan extension, mitohormesis, intracellular signaling, new pharmacological targets and natural therapies. It contributes to the field by covering and gathering the scarcely reported pharmacological approaches in the novel and promising field of mitochondrial biogenesis. There are several diseases that have a mitochondrial origin such as chronic progressive external ophthalmoplegia (CPEO) and the Kearns- Sayre syndrome (KSS), myoclonic epilepsy with ragged-red fibers (MERRF), mitochondrial encephalomyopathy, lactic acidosis and strokelike episodes (MELAS), Leber's hereditary optic neuropathy (LHON), the syndrome of neurogenic muscle weakness, ataxia and retinitis pigmentosa (NARP), and Leigh's syndrome. Likewise, other diseases in which mitochondrial dysfunction plays a very important role include neurodegenerative diseases, diabetes or cancer. Generally, in mitochondrial diseases a mutation in the mitochondrial DNA leads to a loss of functionality of the OXPHOS system and thus to a depletion of ATP and overproduction of ROS, which can, in turn, induce further mtDNA mutations. 
The work by Yu-Ting Wu, Shi-Bei Wu, and Yau-Huei Wei (Department of Biochemistry and Molecular Biology, National Yang-Ming University, Taiwan) [4] focuses on the aforementioned mitochondrial diseases with special attention to the compensatory mechanisms that prompt mitochondria to produce more energy even under mitochondrial defect-conditions. These compensatory mechanisms include the overexpression of antioxidant enzymes, mitochondrial biogenesis and overexpression of respiratory complex subunits, as well as metabolic shift to glycolysis. The pathways observed to be related to mitochondrial biogenesis as a compensatory adaptation to the energetic deficits in mitochondrial diseases are described (PGC- 1, Sirtuins, AMPK). Several pharmacological strategies to trigger these signaling cascades, according to these authors, are the use of bezafibrate to activate the PPAR-PGC-1α axis, the activation of AMPK by resveratrol and the use of Sirt1 agonists such as quercetin or resveratrol. Other strategies currently used include the addition of antioxidant supplements to the diet (dietary supplementation with antioxidants) such as L-carnitine, coenzyme Q10,MitoQ10 and other mitochondria-targeted antioxidants,N-acetylcysteine (NAC), vitamin C, vitamin E vitamin K1, vitamin B, sodium pyruvate or -lipoic acid. As aforementioned, other diseases do not have exclusively a mitochondrial origin but they might have an important mitochondrial component both on their onset and on their development. This is the case of type 2 diabetes or neurodegenerative diseases. Type 2 diabetes is characterized by a peripheral insulin resistance accompanied by an increased secretion of insulin as a compensatory system. Among the explanations about the origin of insulin resistance Mónica Zamora and Josep A. Villena (Department of Experimental and Health Sciences, Universitat Pompeu Fabra / Laboratory of Metabolism and Obesity, Universitat Autònoma de Barcelona, Spain) [5] consider the hypothesis that mitochondrial dysfunction, e.g. impaired (mitochondrial) oxidative capacity of the cell or tissue, is one of the main underlying causes of insulin resistance and type 2 diabetes. Although this hypothesis is not free of controversy due to the uncertainty on the sequence of events during type 2 diabetes onset, e.g. whether mitochondrial dysfunction is the cause or the consequence of insulin resistance, it has been widely observed that improving mitochondrial function also improves insulin sensitivity and prevents type 2 diabetes. Thus restoring oxidative capacity by increasing mitochondrial mass appears as a suitable strategy to treat insulin resistance. The effort made by researchers trying to understand the signaling pathways mediating mitochondrial biogenesis has uncovered new potential pharmacological targets and opens the perspectives for the design of suitable treatments for insulin resistance. In addition some of the current used strategies could be used to treat insulin resistance such as lifestyle interventions (caloric restriction and endurance exercise) and pharmacological interventions (thiazolidinediones and other PPAR agonists, resveratrol and other calorie restriction mimetics, AMPK activators, ERR activators). Mitochondrial biogenesis is of special importance in modern neurochemistry because of the broad spectrum of human diseases arising from defects in mitochondrial ion and ROS homeostasis, energy production and morphology [1]. 
Parkinson´s Disease (PD) is a very good example of this important mitochondrial component on neurodegenerative diseases. Anuradha Yadav, Swati Agrawal, Shashi Kant Tiwari, and Rajnish K. Chaturvedi (CSIR-Indian Institute of Toxicology Research / Academy of Scientific and Innovative Research, India) [6] remark in their review the role of mitochondrial dysfunction in PD with special focus on the role of oxidative stress and bioenergetic deficits. These alterations may have their origin on pathogenic gene mutations in important genes such as DJ-1, -syn, parkin, PINK1 or LRRK2. These mutations, in turn, may cause defects in mitochondrial dynamics (key events like fission/fusion, biogenesis, trafficking in retrograde and anterograde directions, and mitophagy). This work reviews different strategies to enhance mitochondrial bioenergetics in order to ameliorate the neurodegenerative process, with an emphasis on clinical trials reports that indicate their potential. Among them creatine, Coenzyme Q10 and mitochondrial targeted antioxidants/peptides are reported to have the most remarkable effects in clinical trials. They highlight a dual effect of PGC-1α expression on PD prognosis. Whereas a modest expression of this transcriptional co-activator results in positive effects, a moderate to substantial overexpession may have deleterious consequences. As strategies to induce PGC-1α activation, these authors remark the possibility to activate Sirt1 with resveratrol, to use PPAR agonists such as pioglitazone, rosiglitazone, fenofibrate and bezafibrate. Other strategies include the triggering of Nrf2/antioxidant response element (ARE) pathway by triterpenoids (derivatives of oleanolic acid) or by Bacopa monniera, the enhancement of ATP production by carnitine and -lipoic acid. Mitochondrial dysfunctions are the prime source of neurodegenerative diseases and neurodevelopmental disorders. In the context of neural differentiation, Martine Uittenbogaard and Anne Chiaramello (Department of Anatomy and Regenerative Biology, George Washington University School of Medicine and Health Sciences, USA) [7] thoroughly describe the implication of mitochondrial biogenesis on neuronal differentiation, its timing, its regulation by specific signaling pathways and new potential therapeutic strategies. The maintenance of mitochondrial homeostasis is crucial for neuronal development. A mitochondrial dynamic balance is necessary between mitochondrial fusion, fission and quality control systems and mitochondrial biogenesis. Concerning the signaling pathways leading to mitochondrial biogenesis this review highlights the implication of different regulators such as AMPK, SIRT1, PGC-1α, NRF1, NRF2, Tfam, etc. on the specific case of neuronal development, providing examples of diseases in which these pathways are altered and transgenic mouse models lacking these regulators. A common hallmark of several neurodegenerative diseases (Huntington´s Disease, Alzheimer´s Disease and Parkinson´s Disease) is the impaired function or expression of PGC-1α, the master regulator of mitochondrial biogenesis. Among the promising strategies to ameliorate mitochondrial-based diseases these authors highlight the induction of PGC-1α via activation of PPAR receptors (rosiglitazone, bezafibrate) or modulating its activity by AMPK (AICAR, metformin, resveratrol) or SIRT1 (SRT1720 and several isoflavone-derived compounds). This article also presents a review of the current animal and cellular models useful to study mitochondriogenesis. 
Although it is known that many neurodegenerative and neurodevelopmental diseases are originated in mitochondria, the regulation of mitochondrial biogenesis has never been extensively studied. (ABSTRACT TRUNCATED)", "title": "" }, { "docid": "60161ef0c46b4477f0cf35356bc3602c", "text": "Differential privacy is a formal mathematical framework for quantifying and managing privacy risks. It provides provable privacy protection against a wide range of potential attacks, including those * Alexandra Wood is a Fellow at the Berkman Klein Center for Internet & Society at Harvard University. Micah Altman is Director of Research at MIT Libraries. Aaron Bembenek is a PhD student in computer science at Harvard University. Mark Bun is a Google Research Fellow at the Simons Institute for the Theory of Computing. Marco Gaboardi is an Assistant Professor in the Computer Science and Engineering department at the State University of New York at Buffalo. James Honaker is a Research Associate at the Center for Research on Computation and Society at the Harvard John A. Paulson School of Engineering and Applied Sciences. Kobbi Nissim is a McDevitt Chair in Computer Science at Georgetown University and an Affiliate Professor at Georgetown University Law Center; work towards this document was completed in part while the Author was visiting the Center for Research on Computation and Society at Harvard University. David R. O’Brien is a Senior Researcher at the Berkman Klein Center for Internet & Society at Harvard University. Thomas Steinke is a Research Staff Member at IBM Research – Almaden. Salil Vadhan is the Vicky Joseph Professor of Computer Science and Applied Mathematics at Harvard University. This Article is the product of a working group of the Privacy Tools for Sharing Research Data project at Harvard University (http://privacytools.seas.harvard.edu). The working group discussions were led by Kobbi Nissim. Alexandra Wood and Kobbi Nissim are the lead Authors of this Article. Working group members Micah Altman, Aaron Bembenek, Mark Bun, Marco Gaboardi, James Honaker, Kobbi Nissim, David R. O’Brien, Thomas Steinke, Salil Vadhan, and Alexandra Wood contributed to the conception of the Article and to the writing. The Authors thank John Abowd, Scott Bradner, Cynthia Dwork, Simson Garfinkel, Caper Gooden, Deborah Hurley, Rachel Kalmar, Georgios Kellaris, Daniel Muise, Michel Reymond, and Michael Washington for their many valuable comments on earlier versions of this Article. A preliminary version of this work was presented at the 9th Annual Privacy Law Scholars Conference (PLSC 2017), and the Authors thank the participants for contributing thoughtful feedback. The original manuscript was based upon work supported by the National Science Foundation under Grant No. CNS-1237235, as well as by the Alfred P. Sloan Foundation. The Authors’ subsequent revisions to the manuscript were supported, in part, by the US Census Bureau under cooperative agreement no. CB16ADR0160001. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the Authors and do not necessarily reflect the views of the National Science Foundation, the Alfred P. Sloan Foundation, or the US Census Bureau. 210 VAND. J. ENT. & TECH. L. [Vol. 21:1:209 currently unforeseen. Differential privacy is primarily studied in the context of the collection, analysis, and release of aggregate statistics. These range from simple statistical estimations, such as averages, to machine learning. 
Tools for differentially private analysis are now in early stages of implementation and use across a variety of academic, industry, and government settings. Interest in the concept is growing among potential users of the tools, as well as within legal and policy communities, as it holds promise as a potential approach to satisfying legal requirements for privacy protection when handling personal information. In particular, differential privacy may be seen as a technical solution for analyzing and sharing data while protecting the privacy of individuals in accordance with existing legal or policy requirements for de-identification or disclosure limitation. This primer seeks to introduce the concept of differential privacy and its privacy implications to non-technical audiences. It provides a simplified and informal, but mathematically accurate, description of differential privacy. Using intuitive illustrations and limited mathematical formalism, it discusses the definition of differential privacy, how differential privacy addresses privacy risks, how differentially private analyses are constructed, and how such analyses can be used in practice. A series of illustrations is used to show how practitioners and policymakers can conceptualize the guarantees provided by differential privacy. These illustrations are also used to explain related concepts, such as composition (the accumulation of risk across multiple analyses), privacy loss parameters, and privacy budgets. This primer aims to provide a foundation that can guide future decisions when analyzing and sharing statistical data about individuals, informing individuals about the privacy protection they will be afforded, and designing policies and regulations for robust privacy protection.", "title": "" } ]
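The differential privacy primer that ends the list above discusses releasing aggregate statistics, privacy-loss parameters, and budgets without giving a concrete mechanism. As a generic illustration that is not drawn from the primer itself, the sketch below releases a simple count with the standard Laplace mechanism; the data values and the epsilon are made up for the example.

    import numpy as np

    def dp_count(records, predicate, epsilon):
        # A counting query has sensitivity 1: adding or removing one individual
        # changes the true count by at most 1, so Laplace noise with scale
        # 1/epsilon gives epsilon-differential privacy for this single release.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Hypothetical example: how many respondents reported a value above 50,
    # released with privacy-loss parameter epsilon = 0.5.
    responses = [12.0, 87.5, 55.2, 43.1, 91.0, 60.3]
    noisy_count = dp_count(responses, lambda v: v > 50.0, epsilon=0.5)

Each additional release of this kind consumes more of the privacy budget, which is the composition behavior the primer refers to.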
scidocsrr
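The reinforcement-learning passage in the record above (the one on very large discrete action sets) proposes embedding actions in a continuous space, letting the policy output a point in that space, and using a nearest-neighbour lookup to recover real actions. The sketch below is a simplified reading of that idea; the random embedding matrix, the placeholder Q-function, and the brute-force neighbour search are stand-ins for components a real agent would learn or index properly.

    import numpy as np

    rng = np.random.default_rng(0)
    NUM_ACTIONS, EMBED_DIM = 100_000, 16
    # Assumed action embeddings; in practice they come from prior knowledge
    # about the actions (e.g. item features in a recommender).
    action_embeddings = rng.standard_normal((NUM_ACTIONS, EMBED_DIM))

    def nearest_actions(proto_action, k=10):
        # Brute-force search shown for clarity; an approximate nearest-neighbour
        # index would give the sub-linear lookup the passage mentions.
        dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
        return np.argpartition(dists, k)[:k]

    def q_value(state, action_id):
        return float(rng.random())  # placeholder critic

    def select_action(state, proto_action, k=10):
        candidates = nearest_actions(proto_action, k)
        return max(candidates, key=lambda a: q_value(state, a))

    proto = rng.standard_normal(EMBED_DIM)   # hypothetical policy output
    chosen = select_action(state=None, proto_action=proto)

Generalisation comes from the embedding itself: actions never tried together can still be ranked against each other because they sit close in the continuous space.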
42a373dad8d6004ba571098d62510a8f
The Internet of Battle Things
[ { "docid": "42c6ec7e27bc1de6beceb24d52b7216c", "text": "Internet of Things (IoT) refers to the expansion of Internet technologies to include wireless sensor networks (WSNs) and smart objects by extensive interfacing of exclusively identifiable, distributed communication devices. Due to the close connection with the physical world, it is an important requirement for IoT technology to be self-secure in terms of a standard information security model components. Autonomic security should be considered as a critical priority and careful provisions must be taken in the design of dynamic techniques, architectures and self-sufficient frameworks for future IoT. Over the years, many researchers have proposed threat mitigation approaches for IoT and WSNs. This survey considers specific approaches requiring minimal human intervention and discusses them in relation to self-security. This survey addresses and brings together a broad range of ideas linked together by IoT, autonomy and security. More particularly, this paper looks at threat mitigation approaches in IoT using an autonomic taxonomy and finally sets down future directions. & 2014 Published by Elsevier Ltd.", "title": "" } ]
[ { "docid": "6c3a166eea824f588e3e3a135e2e7a30", "text": "BACKGROUND\nMobile health (mHealth) describes the use of portable electronic devices with software applications to provide health services and manage patient information. With approximately 5 billion mobile phone users globally, opportunities for mobile technologies to play a formal role in health services, particularly in low- and middle-income countries, are increasingly being recognized. mHealth can also support the performance of health care workers by the dissemination of clinical updates, learning materials, and reminders, particularly in underserved rural locations in low- and middle-income countries where community health workers deliver integrated community case management to children sick with diarrhea, pneumonia, and malaria.\n\n\nOBJECTIVE\nOur aim was to conduct a thematic review of how mHealth projects have approached the intersection of cellular technology and public health in low- and middle-income countries and identify the promising practices and experiences learned, as well as novel and innovative approaches of how mHealth can support community health workers.\n\n\nMETHODS\nIn this review, 6 themes of mHealth initiatives were examined using information from peer-reviewed journals, websites, and key reports. Primary mHealth technologies reviewed included mobile phones, personal digital assistants (PDAs) and smartphones, patient monitoring devices, and mobile telemedicine devices. We examined how these tools could be used for education and awareness, data access, and for strengthening health information systems. We also considered how mHealth may support patient monitoring, clinical decision making, and tracking of drugs and supplies. Lessons from mHealth trials and studies were summarized, focusing on low- and middle-income countries and community health workers.\n\n\nRESULTS\nThe review revealed that there are very few formal outcome evaluations of mHealth in low-income countries. Although there is vast documentation of project process evaluations, there are few studies demonstrating an impact on clinical outcomes. There is also a lack of mHealth applications and services operating at scale in low- and middle-income countries. The most commonly documented use of mHealth was 1-way text-message and phone reminders to encourage follow-up appointments, healthy behaviors, and data gathering. Innovative mHealth applications for community health workers include the use of mobile phones as job aides, clinical decision support tools, and for data submission and instant feedback on performance.\n\n\nCONCLUSIONS\nWith partnerships forming between governments, technologists, non-governmental organizations, academia, and industry, there is great potential to improve health services delivery by using mHealth in low- and middle-income countries. As with many other health improvement projects, a key challenge is moving mHealth approaches from pilot projects to national scalable programs while properly engaging health workers and communities in the process. By harnessing the increasing presence of mobile phones among diverse populations, there is promising evidence to suggest that mHealth can be used to deliver increased and enhanced health care services to individuals and communities, while helping to strengthen health systems.", "title": "" }, { "docid": "7d197033396c7a55593da79a5a70fa96", "text": "1. 
Introduction Fundamental questions about weighting (Fig 1) seem to be the most common during the analysis of survey data and I encounter them almost every week. Yet we \"lack a single, reasonably comprehensive, introductory explanation of the process of weighting\" [Sharot 1986], readily available to and usable by survey practitioners, who are looking for simple guidance, and this paper aims to meet some of that need. Some partial treatments have appeared in the survey literature [e.g., Kish 1965], but the topic seldom appears even in the indexes. However, we can expect growing interest, as witnessed by six publications since 1987 listed in the references.", "title": "" }, { "docid": "7b6e811ea3f227c33755049355949eaf", "text": "We revisit the task of learning a Euclidean metric from data. We approach this problem from first principles and formulate it as a surprisingly simple optimization problem. Indeed, our formulation even admits a closed form solution. This solution possesses several very attractive properties: (i) an innate geometric appeal through the Riemannian geometry of positive definite matrices; (ii) ease of interpretability; and (iii) computational speed several orders of magnitude faster than the widely used LMNN and ITML methods. Furthermore, on standard benchmark datasets, our closed-form solution consistently attains higher classification accuracy.", "title": "" }, { "docid": "766dd6c18f645d550d98f6e3e86c7b2f", "text": "Licorice root has been used for years to regulate gastrointestinal function in traditional Chinese medicine. This study reveals the gastrointestinal effects of isoliquiritigenin, a flavonoid isolated from the roots of Glycyrrhiza glabra (a kind of Licorice). In vivo, isoliquiritigenin produced a dual dose-related effect on the charcoal meal travel, inhibitory at the low doses, while prokinetic at the high doses. In vitro, isoliquiritigenin showed an atropine-sensitive concentration-dependent spasmogenic effect in isolated rat stomach fundus. However, a spasmolytic effect was observed in isolated rabbit jejunums, guinea pig ileums and atropinized rat stomach fundus, either as noncompetitive inhibition of agonist concentration-response curves, inhibition of high K(+) (80 mM)-induced contractions, or displacement of Ca(2+) concentration-response curves to the right, indicating a calcium antagonist effect. Pretreatment with N(omega)-nitro-L-arginine methyl ester (L-NAME; 30 microM), indomethacin (10 microM), methylene blue (10 microM), tetraethylammonium chloride (0.5 mM), glibenclamide (1 microM), 4-aminopyridine (0.1 mM), or clotrimazole (1 microM) did not inhibit the spasmolytic effect. These results indicate that isoliquiritigenin plays a dual role in regulating gastrointestinal motility, both spasmogenic and spasmolytic. The spasmogenic effect may involve the activating of muscarinic receptors, while the spasmolytic effect is predominantly due to blockade of the calcium channels.", "title": "" }, { "docid": "5acf896927ec23d1d11c53f92a4850da", "text": "Emergence of modern techniques for scientific data collection has resulted in large scale accumulation of data pertaining to diverse fields. Conventional database querying methods are inadequate to extract useful information from huge data banks. Cluster analysis is a primary method for database mining [8]. It is either used as a stand-alone tool to get insight into the distribution of a data set or as a pre-processing step for other algorithms operating on the detected clusters. 
Almost all of the well-known clustering algorithms require input parameters which are hard to determine but have a significant influence on the clustering result. Furthermore, for many real-data sets there does not even exist a global parameter setting for which the result of the clustering algorithm describes the intrinsic clustering structure accurately [1], [2]. DBSCAN (Density Based Spatial Clustering of Application with Noise) [1] is a base algorithm for density based clustering techniques. This paper gives a survey of density based clustering algorithms with the proposed enhanced algorithm that automatically selects the input parameters along with its implementation and comparison with the existing DBSCAN algorithm. The experimental results show that the proposed algorithm can detect the clusters of varied density with different shapes and sizes from large amount of data which contains noise and outliers, requires only one input parameter and gives better output than the DBSCAN algorithm. Keywords: Clustering Algorithms, Data mining, DBSCAN, Density, Eps, Minpts, and VDBSCAN.", "title": "" }, { "docid": "b717cd61178ba093026fca5fad62248d", "text": "This paper proposes a new low power and low area 4x4 array multiplier designed using modified Gate diffusion Input (GDI) technique. By using GDI cell, the transistor count is greatly reduced. Basic GDI technique shows a drawback of low voltage swing at output which prevents it for use in multiple stage circuits efficiently. We have used modified GDI technique which shows full swing output and hence can be used in multistage circuits. The whole design is made and simulated in 180nm UMC technology at a supply voltage of 1.8V using Cadence Virtuoso Environment.", "title": "" }, { "docid": "616ffe5c6cbb6a32a14042d52bd410d3", "text": "In the demo, we demonstrate a mobile food recognition system with Fisher Vector and linear one-vs-rest SVMs which enable us to record our food habits easily. In the experiments with 100 kinds of food categories, we have achieved the 79.2% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. The prototype system is open to the public as an Android-based smart-", "title": "" }, { "docid": "073486fe6bcd756af5f5325b27c57912", "text": "This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. 
The pattern of performance demonstrated by GG strongly supports such a conclusion.", "title": "" }, { "docid": "c47fde74be75b5e909d7657bb64bf23d", "text": "As the primary stakeholder for the Enterprise Architecture, the Chief Information Officer (CIO) is responsible for the evolution of the enterprise IT system. An important part of the CIO role is therefore to make decisions about strategic and complex IT matters. This paper presents a cost effective and scenariobased approach for providing the CIO with an accurate basis for decision making. Scenarios are analyzed and compared against each other by using a number of problem-specific easily measured system properties identified in literature. In order to test the usefulness of the approach, a case study has been carried out. A CIO needed guidance on how to assign functionality and data within four overlapping systems. The results are quantifiable and can be presented graphically, thus providing a cost-efficient and easily understood basis for decision making. The study shows that the scenario-based approach can make complex Enterprise Architecture decisions understandable for CIOs and other business-orientated stakeholders", "title": "" }, { "docid": "f267b329f52628d3c52a8f618485ae95", "text": "We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results.", "title": "" }, { "docid": "32ca9711622abd30c7c94f41b91fa3f6", "text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.", "title": "" }, { "docid": "4dc015d3400673bfd3e9ab7d60352e33", "text": "We describe work that is part of a research project on static code analysis between the Alexandru Ioan Cuza University and Bitdefender. The goal of the project is to develop customized static analysis tools for detecting potential vulnerabilities in C/C++ code. We have so far benchmarked several existing static analysis tools for C/C++ against the Toyota ITC test suite in order to determine which tools are best suited to our purpose. 
We discuss and compare several quality indicators such as precision, recall and running time of the tools. We analyze which tools perform best for various categories of potential vulnerabilities such as buffer overflows, integer overflow, etc.", "title": "" }, { "docid": "85cfda0c6a2964d342035b45d2ad47ab", "text": "Distributed Denial of Service (DDoS) attacks grow rapidly and become one of the fatal threats to the Internet. Automatically detecting DDoS attack packets is one of the main defense mechanisms. Conventional solutions monitor network traffic and identify attack activities from legitimate network traffic based on statistical divergence. Machine learning is another method to improve identifying performance based on statistical features. However, conventional machine learning techniques are limited by the shallow representation models. In this paper, we propose a deep learning based DDoS attack detection approach (DeepDefense). Deep learning approach can automatically extract high-level features from low-level ones and gain powerful representation and inference. We design a recurrent deep neural network to learn patterns from sequences of network traffic and trace network attack activities. The experimental results demonstrate a better performance of our model compared with conventional machine learning models. We reduce the error rate from 7.517% to 2.103% compared with conventional machine learning method in the larger data set.", "title": "" }, { "docid": "046df1ccbc545db05d0d91fe8f73d64a", "text": "Precise models of the robot inverse dynamics allow the design of significantly more accurate, energy-efficient and more compliant robot control. However, in some cases the accuracy of rigidbody models does not suffice for sound control performance due to unmodeled nonlinearities arising from hydraulic cable dynamics, complex friction or actuator dynamics. In such cases, estimating the inverse dynamics model from measured data poses an interesting alternative. Nonparametric regression methods, such as Gaussian process regression (GPR) or locally weighted projection regression (LWPR), are not as restrictive as parametric models and, thus, offer a more flexible framework for approximating unknown nonlinearities. In this paper, we propose a local approximation to the standard GPR, called local GPR (LGP), for real-time model online-learning by combining the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of LWPR. The approach is shown to have competitive learning performance for high-dimensional data while being sufficiently fast for real-time learning. The effectiveness of LGP is exhibited by a comparison with the state-of-the-art regression techniques, such as GPR, LWPR and ν-SVR. The applicability of the proposed LGP method is demonstrated by real-time online-learning of the inverse dynamics model for robot model-based control on a Barrett WAM robot arm.", "title": "" }, { "docid": "16924ee2e6f301d962948884eeafc934", "text": "Companies have realized they need to hire data scientists, academic institutions are scrambling to put together data-science programs, and publications are touting data science as a hot-even \"sexy\"-career choice. However, there is confusion about what exactly data science is, and this confusion could lead to disillusionment as the concept diffuses into meaningless buzz. In this article, we argue that there are good reasons why it has been hard to pin down exactly what is data science. 
One reason is that data science is intricately intertwined with other important concepts also of growing importance, such as big data and data-driven decision making. Another reason is the natural tendency to associate what a practitioner does with the definition of the practitioner's field; this can result in overlooking the fundamentals of the field. We believe that trying to define the boundaries of data science precisely is not of the utmost importance. We can debate the boundaries of the field in an academic setting, but in order for data science to serve business effectively, it is important (i) to understand its relationships to other important related concepts, and (ii) to begin to identify the fundamental principles underlying data science. Once we embrace (ii), we can much better understand and explain exactly what data science has to offer. Furthermore, only once we embrace (ii) should we be comfortable calling it data science. In this article, we present a perspective that addresses all these concepts. We close by offering, as examples, a partial list of fundamental principles underlying data science.", "title": "" }, { "docid": "edfc9cb39fe45a43aed78379bafa2dfc", "text": "We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.", "title": "" }, { "docid": "2be9c1580e78d4c3f9c1e2fe115a89bc", "text": "Robotic devices have been shown to be efficacious in the delivery of therapy to treat upper limb motor impairment following stroke. However, the application of this technology to other types of neurological injury has been limited to case studies. In this paper, we present a multi degree of freedom robotic exoskeleton, the MAHI Exo II, intended for rehabilitation of the upper limb following incomplete spinal cord injury (SCI). We present details about the MAHI Exo II and initial findings from a clinical evaluation of the device with eight subjects with incomplete SCI who completed a multi-session training protocol. Clinical assessments show significant gains when comparing pre- and post-training performance in functional tasks. This paper explores a range of robotic measures capturing movement quality and smoothness that may be useful in tracking performance, providing as feedback to the subject, or incorporating into an adaptive training protocol. 
Advantages and disadvantages of the various investigated measures are discussed with regard to the type of movement segmentation that can be applied to the data collected during unassisted movements where the robot is backdriven and encoder data is recorded for post-processing.", "title": "" }, { "docid": "11ae42bedc18dedd0c29004000a4ec00", "text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.", "title": "" }, { "docid": "9cdddf98d24d100c752ea9d2b368bb77", "text": "Using predictive models to identify patterns that can act as biomarkers for different neuropathological conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification where previous work has shown that it can be beneficial to incorporate a wide variety of meta features, such as socio-cultural traits, into predictive modeling. A graph-based approach naturally suits these scenarios, where a contextual graph captures traits that characterize a population, while the specific brain activity patterns are utilized as a multivariate signal at the nodes. Graph neural networks have shown improvements in inferencing with graph-structured data. Though the underlying graph strongly dictates the overall performance, there exists no systematic way of choosing an appropriate graph in practice, thus making predictive models non-robust. To address this, we propose a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs, and reduces the sensitivity of models on the choice of graph construction. We demonstrate its effectiveness on the challenging Autism Brain Imaging Data Exchange (ABIDE) dataset and show that our approach improves upon recently proposed graph-based neural networks. We also show that our method remains more robust to noisy graphs.", "title": "" }, { "docid": "83ccee768c29428ea8a575b2e6faab7d", "text": "Audio-based cough detection has become more pervasive in recent years because of its utility in evaluating treatments and the potential to impact the quality of life for individuals with chronic cough. We critically examine the current state of the art in cough detection, concluding that existing approaches expose private audio recordings of users and bystanders. We present a novel algorithm for detecting coughs from the audio stream of a mobile phone. Our system allows cough sounds to be reconstructed from the feature set, but prevents speech from being reconstructed intelligibly. We evaluate our algorithm on data collected in the wild and report an average true positive rate of 92% and false positive rate of 0.5%. We also present the results of two psychoacoustic experiments which characterize the tradeoff between the fidelity of reconstructed cough sounds and the intelligibility of reconstructed speech.", "title": "" } ]
scidocsrr
de5020aeb456aef4b030eff5dffe5f7f
Air quality data clustering using EPLS method
[ { "docid": "ff1cc31ab089d5d1d09002866c7dc043", "text": "In almost every scientific field, measurements are performed over time. These observations lead to a collection of organized data called time series. The purpose of time-series data mining is to try to extract all meaningful knowledge from the shape of data. Even if humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we intend to provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases, time-series task relies on the same components for implementation, we divide the literature depending on these common aspects, namely representation techniques, distance measures, and indexing methods. The study of the relevant literature has been categorized for each individual aspects. Four types of robustness could then be formalized and any kind of distance could then be classified. Finally, the study submits various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field.", "title": "" }, { "docid": "6ca20939907ffe75d5c0125b87abecf3", "text": "Multi-label learning studies the problem where each example is represented by a single instance while associated with a set of labels simultaneously. During the past decade, significant amount of progresses have been made toward this emerging machine learning paradigm. This paper aims to provide a timely review on this area with emphasis on state-of-the-art multi-label learning algorithms. Firstly, fundamentals on multi-label learning including formal definition and evaluation metrics are given. Secondly and primarily, eight representative multi-label learning algorithms are scrutinized under common notations with relevant analyses and discussions. Thirdly, several related learning settings are briefly summarized. As a conclusion, online resources and open research problems on multi-label learning are outlined for reference purposes.", "title": "" }, { "docid": "b52da336c6d70923a1c4606f5076a3ba", "text": "Given the recent explosion of interest in streaming data and online algorithms, clustering of time-series subsequences, extracted via a sliding window, has received much attention. In this work, we make a surprising claim. Clustering of time-series subsequences is meaningless. More concretely, clusters extracted from these time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature. We can justify calling our claim surprising because it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. 
Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method that, based on the concept of time-series motifs, is able to meaningfully cluster subsequences on some time-series datasets.", "title": "" } ]
[ { "docid": "8dce819cc31cf4899cf4bad2dd117dc1", "text": "BACKGROUND\nCaffeine and sodium bicarbonate ingestion have been suggested to improve high-intensity intermittent exercise, but it is unclear if these ergogenic substances affect performance under provoked metabolic acidification. To study the effects of caffeine and sodium bicarbonate on intense intermittent exercise performance and metabolic markers under exercise-induced acidification, intense arm-cranking exercise was performed prior to intense intermittent running after intake of placebo, caffeine and sodium bicarbonate.\n\n\nMETHODS\nMale team-sports athletes (n = 12) ingested sodium bicarbonate (NaHCO3; 0.4 g.kg(-1) b.w.), caffeine (CAF; 6 mg.kg(-1) b.w.) or placebo (PLA) on three different occasions. Thereafter, participants engaged in intense arm exercise prior to the Yo-Yo intermittent recovery test level-2 (Yo-Yo IR2). Heart rate, blood lactate and glucose as well as rating of perceived exertion (RPE) were determined during the protocol.\n\n\nRESULTS\nCAF and NaHCO3 elicited a 14 and 23% improvement (P < 0.05), respectively, in Yo-Yo IR2 performance, post arm exercise compared to PLA. The NaHCO3 trial displayed higher [blood lactate] (P < 0.05) compared to CAF and PLA (10.5 ± 1.9 vs. 8.8 ± 1.7 and 7.7 ± 2.0 mmol.L(-1), respectively) after the Yo-Yo IR2. At exhaustion CAF demonstrated higher (P < 0.05) [blood glucose] compared to PLA and NaHCO3 (5.5 ± 0.7 vs. 4.2 ± 0.9 vs. 4.1 ± 0.9 mmol.L(-1), respectively). RPE was lower (P < 0.05) during the Yo-Yo IR2 test in the NaHCO3 trial in comparison to CAF and PLA, while no difference in heart rate was observed between trials.\n\n\nCONCLUSIONS\nCaffeine and sodium bicarbonate administration improved Yo-Yo IR2 performance and lowered perceived exertion after intense arm cranking exercise, with greater overall effects of sodium bicarbonate intake.", "title": "" }, { "docid": "acbb920f48119857f598388a39cdebb6", "text": "Quantitative analyses in landscape ecology have traditionally been dominated by the patch-mosaic concept in which landscapes are modeled as a mosaic of discrete patches. This model is useful for analyzing categorical data but cannot sufficiently account for the spatial heterogeneity present in continuous landscapes. Sub-pixel remote sensing classifications offer a potential data source for capturing continuous spatial heterogeneity but lack discrete land cover classes and therefore cannot be analyzed using standard landscape metric tools. This research introduces the threshold gradient method to allow transformation of continuous sub-pixel classifications into a series of discrete maps based on land cover proportion (i.e., intensity) that can be analyzed using landscape metric tools. Sub-pixel data are reclassified at multiple thresholds along a land cover continuum and landscape metrics are computed for each map. Metrics are plotted in response to intensity and these ‘scalograms’ are mathematically modeled using curve fitting techniques to allow determination of critical land cover thresholds (e.g., inflection points) where considerable landscape changes are occurring. 
Results show that critical land cover intensities vary between metrics, and the approach can generate increased ecological information not available with other landscape characterization methods.", "title": "" }, { "docid": "a8ff130dcb899214da73f66e12a5a1b1", "text": "We designed and evaluated an assumption-free, deep learning-based methodology for animal health monitoring, specifically for the early detection of respiratory disease in growing pigs based on environmental sensor data. Two recurrent neural networks (RNNs), each comprising gated recurrent units (GRUs), were used to create an autoencoder (GRU-AE) into which environmental data, collected from a variety of sensors, was processed to detect anomalies. An autoencoder is a type of network trained to reconstruct the patterns it is fed as input. By training the GRU-AE using environmental data that did not lead to an occurrence of respiratory disease, data that did not fit the pattern of \"healthy environmental data\" had a greater reconstruction error. All reconstruction errors were labelled as either normal or anomalous using threshold-based anomaly detection optimised with particle swarm optimisation (PSO), from which alerts are raised. The results from the GRU-AE method outperformed state-of-the-art techniques, raising alerts when such predictions deviated from the actual observations. The results show that a change in the environment can result in occurrences of pigs showing symptoms of respiratory disease within 1⁻7 days, meaning that there is a period of time during which their keepers can act to mitigate the negative effect of respiratory diseases, such as porcine reproductive and respiratory syndrome (PRRS), a common and destructive disease endemic in pigs.", "title": "" }, { "docid": "632f42f71b09f4dea40bc1cccd2d9604", "text": "The phenomenon of radicalization is investigated within a mixed population composed of core and sensitive subpopulations. The latest includes first to third generation immigrants. Respective ways of life may be partially incompatible. In case of a conflict core agents behave as inflexible about the issue. In contrast, sensitive agents can decide either to live peacefully adjusting their way of life to the core one, or to oppose it with eventually joining violent activities. The interplay dynamics between peaceful and opponent sensitive agents is driven by pairwise interactions. These interactions occur both within the sensitive population and by mixing with core agents. The update process is monitored using a Lotka-Volterra-like Ordinary Differential Equation. Given an initial tiny minority of opponents that coexist with both inflexible and peaceful agents, we investigate implications on the emergence of radicalization. Opponents try to turn peaceful agents to opponents driving radicalization. However, inflexible core agents may step in to bring back opponents to a peaceful choice thus weakening the phenomenon. The required minimum individual core involvement to actually curb radicalization is calculated. It is found to be a function of both the majority or minority status of the sensitive subpopulation with respect to the core subpopulation and the degree of activeness of opponents. The results highlight the instrumental role core agents can have to hinder radicalization within the sensitive subpopulation. 
Some hints are outlined to favor novel public policies towards social integration.", "title": "" }, { "docid": "eee5ffff364575afad1dcebbf169777b", "text": "In this paper, we proposed the multiclass support vector machine (SVM) with the error-correcting output codes for the multiclass electroencephalogram (EEG) signals classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies", "title": "" }, { "docid": "89263084f29469d1c363da55c600a971", "text": "Today when there are more than 1 billion Android users all over the world, it shows that its popularity has no equal. These days mobile phones have become so intrusive in our daily lives that when they needed can give huge amount of information to forensic examiners. Till the date of writing this paper there are many papers citing the need of mobile device forensic and ways of getting the vital artifacts through mobile devices for different purposes. With vast options of popular and less popular forensic tools and techniques available today, this papers aims to bring them together under a comparative study so that this paper could serve as a starting point for several android users, future forensic examiners and investigators. During our survey we found scarcity for papers on tools for android forensic. In this paper we have analyzed different tools and techniques used in android forensic and at the end tabulated the results and findings.", "title": "" }, { "docid": "40cb853a6ca202fa74f1838673421107", "text": "The analytics platform at Twitter has experienced tremendous growth over the past few years in terms of size, complexity, number of users, and variety of use cases. In this paper, we discuss the evolution of our infrastructure and the development of capabilities for data mining on \"big data\". One important lesson is that successful big data mining in practice is about much more than what most academics would consider data mining: life \"in the trenches\" is occupied by much preparatory work that precedes the application of data mining algorithms and followed by substantial effort to turn preliminary models into robust solutions. In this context, we discuss two topics: First, schemas play an important role in helping data scientists understand petabyte-scale data stores, but they're insufficient to provide an overall \"big picture\" of the data available to generate insights. Second, we observe that a major challenge in building data analytics platforms stems from the heterogeneity of the various components that must be integrated together into production workflows---we refer to this as \"plumbing\". This paper has two goals: For practitioners, we hope to share our experiences to flatten bumps in the road for those who come after us. 
For academic researchers, we hope to provide a broader context for data mining in production environments, pointing out opportunities for future work.", "title": "" }, { "docid": "864d1c5a2861acc317f9f2a37c6d3660", "text": "We report a case of an 8-month-old child with a primitive myxoid mesenchymal tumor of infancy arising in the thenar eminence. The lesion recurred after conservative excision and was ultimately nonresponsive to chemotherapy, necessitating partial amputation. The patient remains free of disease 5 years after this radical surgery. This is the 1st report of such a tumor since it was initially described by Alaggio and colleagues in 2006. The pathologic differential diagnosis is discussed.", "title": "" }, { "docid": "cb7a9b816fc1b83670cb9fb377974e5d", "text": "BACKGROUND\nCare attendants constitute the main workforce in nursing homes, but their heavy workload, low autonomy, and indefinite responsibility result in high levels of stress and may affect quality of care. However, few studies have focused of this problem.\n\n\nOBJECTIVES\nThe aim of this study was to examine work-related stress and associated factors that affect care attendants in nursing homes and to offer suggestions for how management can alleviate these problems in care facilities.\n\n\nMETHODS\nWe recruited participants from nine nursing homes with 50 or more beds located in middle Taiwan; 110 care attendants completed the questionnaire. The work stress scale for the care attendants was validated and achieved good reliability (Cronbach's alpha=0.93). We also conducted exploratory factor analysis.\n\n\nRESULTS\nSix factors were extracted from the work stress scale: insufficient ability, stressful reactions, heavy workload, trouble in care work, poor management, and working time problems. The explained variance achieved 64.96%. Factors related to higher work stress included working in a hospital-based nursing home, having a fixed schedule, night work, feeling burden, inconvenient facility, less enthusiasm, and self-rated higher stress.\n\n\nCONCLUSION\nWork stress for care attendants in nursing homes is related to human resource management and quality of care. We suggest potential management strategies to alleviate work stress for these workers.", "title": "" }, { "docid": "042431e96028ed9729e6b174a78d642d", "text": "We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.", "title": "" }, { "docid": "a4788b60b0fc16551f03557483a8a532", "text": "The rapid growth in the population density in urban cities demands tolerable provision of services and infrastructure. To meet the needs of city inhabitants. 
Thus, increase in the request for embedded devices, such as sensors, actuators, and smartphones, etc., which is providing a great business potential towards the new era of Internet of Things (IoT); in which all the devices are capable of interconnecting and communicating with each other over the Internet. Therefore, the Internet technologies provide a way towards integrating and sharing a common communication medium. Having such knowledge, in this paper, we propose a combined IoT-based system for smart city development and urban planning using Big Data analytics. We proposed a complete system, which consists of various types of sensors deployment including smart home sensors, vehicular networking, weather and water sensors, smart parking sensors, and surveillance objects, etc. A four-tier architecture is proposed which include 1) Bottom Tier-1: which is responsible for IoT sources, data generations, and collections 2) Intermediate Tier-1: That is responsible for all type of communication between sensors, relays, base stations, the internet, etc. 3) Intermediate Tier 2: it is responsible for data management and processing using Hadoop framework, and 4) Top tier: is responsible for application and usage of the data analysis and results generated. The system implementation consists of various steps that start from data generation and collecting, aggregating, filtration, classification, preprocessing, computing and decision making. The proposed system is implemented using Hadoop with Spark, voltDB, Storm or S4 for real time processing of the IoT data to generate results in order to establish the smart city. For urban planning or city future development, the offline historical data is analyzed on Hadoop using MapReduce programming. IoT datasets generated by smart homes, smart parking weather, pollution, and vehicle data sets are used for analysis and evaluation. Such type of system with full functionalities does not exist. Similarly, the results show that the proposed system is more scalable and efficient than the existing systems. Moreover, the system efficiency is measured in term of throughput and processing time.", "title": "" }, { "docid": "58677916e11e6d5401b7396d117a517b", "text": "This work contributes to the development of a common framework for the discussion and analysis of dexterous manipulation across the human and robotic domains. An overview of previous work is first provided along with an analysis of the tradeoffs between arm and hand dexterity. A hand-centric and motion-centric manipulation classification is then presented and applied in four different ways. It is first discussed how the taxonomy can be used to identify a manipulation strategy. Then, applications for robot hand analysis and engineering design are explained. Finally, the classification is applied to three activities of daily living (ADLs) to distinguish the patterns of dexterous manipulation involved in each task. The same analysis method could be used to predict problem ADLs for various impairments or to produce a representative benchmark set of ADL tasks. 
Overall, the classification scheme proposed creates a descriptive framework that can be used to effectively describe hand movements during manipulation in a variety of contexts and might be combined with existing object centric or other taxonomies to provide a complete description of a specific manipulation task.", "title": "" }, { "docid": "5828218248b4da8991b18dc698ef25ee", "text": "Little is known about the mechanisms of smartphone features that are used in sealing relationships between psychopathology and problematic smartphone use. Our purpose was to investigate two specific smartphone usage types e process use and social use e for associations with depression and anxiety; and in accounting for relationships between anxiety/depression and problematic smartphone use. Social smartphone usage involves social feature engagement (e.g., social networking, messaging), while process usage involves non-social feature engagement (e.g., news consumption, entertainment, relaxation). 308 participants from Amazon's Mechanical Turk internet labor market answered questionnaires about their depression and anxiety symptoms, and problematic smartphone use along with process and social smartphone use dimensions. Statistically adjusting for age and sex, we discovered the association between anxiety symptoms was stronger with process versus social smartphone use. Depression symptom severity was negatively associated with greater social smartphone use. Process smartphone use was more strongly associated with problematic smartphone use. Finally, process smartphone use accounted for relationships between anxiety severity and problematic smartphone use. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8ef58dee2a9cbda23f642cb07bed013b", "text": "Audio information plays a rather important role in the increasing digital content that is available today, resulting in a need for methodologies that automatically analyze such content: audio event recognition for home automations and surveillance systems, speech recognition, music information retrieval, multimodal analysis (e.g. audio-visual analysis of online videos for content-based recommendation), etc. This paper presents pyAudioAnalysis, an open-source Python library that provides a wide range of audio analysis procedures including: feature extraction, classification of audio signals, supervised and unsupervised segmentation and content visualization. pyAudioAnalysis is licensed under the Apache License and is available at GitHub (https://github.com/tyiannak/pyAudioAnalysis/). Here we present the theoretical background behind the wide range of the implemented methodologies, along with evaluation metrics for some of the methods. pyAudioAnalysis has been already used in several audio analysis research applications: smart-home functionalities through audio event detection, speech emotion recognition, depression classification based on audio-visual features, music segmentation, multimodal content-based movie recommendation and health applications (e.g. monitoring eating habits). The feedback provided from all these particular audio applications has led to practical enhancement of the library.", "title": "" }, { "docid": "6936462dee2424b92c7476faed5b5a23", "text": "A significant challenge in scene text detection is the large variation in text sizes. In particular, small text are usually hard to detect. This paper presents an accurate oriented text detector based on Faster R-CNN. 
We observe that Faster R-CNN is suitable for general object detection but inadequate for scene text detection due to the large variation in text size. We apply feature fusion both in RPN and Fast R-CNN to alleviate this problem and furthermore, enhance model's ability to detect relatively small text. Our text detector achieves comparable results to those state of the art methods on ICDAR 2015 and MSRA-TD500, showing its advantage and applicability.", "title": "" }, { "docid": "17676785398d4ed24cc04cb3363a7596", "text": "Generative models (GMs) such as Generative Adversary Network (GAN) and Variational Auto-Encoder (VAE) have thrived these years and achieved high quality results in generating new samples. Especially in Computer Vision, GMs have been used in image inpainting, denoising and completion, which can be treated as the inference from observed pixels to corrupted pixels. However, images are hierarchically structured which are quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependences. Traditionally they are modeled by Bayesian Network (BN). However, the learning and inference of BN model are NP-hard thus the number of stochastic variables in BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. Except for black box analysis, we’ve also done a serial of experiments on Markov border inference of GMs for white box analysis and give theoretical results.", "title": "" }, { "docid": "af7f83599c163d0f519f1e2636ae8d44", "text": "There is a set of characterological attributes thought to be associated with developing success at critical thinking (CT). This paper explores the disposition toward CT theoretically, and then as it appears to be manifest in college students. Factor analytic research grounded in a consensus-based conceptual analysis of CT described seven aspects of the overall disposition toward CT: truth-seeking, open-mindedness, analyticity, systematicity, CTconfidence, inquisitiveness, and cognitive maturity. The California Critical Thinking Disposition Inventory (CCTDI), developed in 1992, was used to sample college students at two comprehensive universities. Entering college freshman students showed strengths in openmindedness and inquisitiveness, weaknesses in systematicity and opposition to truth-seeking. Additional research indicates the disposition toward CT is highly correlated with the psychological constructs of absorption and openness to experience, and strongly predictive of ego-resiliency. A preliminary study explores the interesting and potentially complex interrelationship between the disposition toward CT and CT abilities. In addition to the significance of this work for psychological studies of human development, empirical research on the disposition toward CT promises important implications for all levels of education. 1 This essay appeared as Facione, PA, Sánchez, (Giancarlo) CA, Facione, NC & Gainen, J., (1995). The disposition toward critical thinking. Journal of General Education. Volume 44, Number(1). 
1-25.", "title": "" }, { "docid": "b2ec062fd7a7a9b124f2663a2fb002cb", "text": "Major international projects are underway that are aimed at creating a comprehensive catalogue of all the genes responsible for the initiation and progression of cancer. These studies involve the sequencing of matched tumour–normal samples followed by mathematical analysis to identify those genes in which mutations occur more frequently than expected by random chance. Here we describe a fundamental problem with cancer genome studies: as the sample size increases, the list of putatively significant genes produced by current analytical methods burgeons into the hundreds. The list includes many implausible genes (such as those encoding olfactory receptors and the muscle protein titin), suggesting extensive false-positive findings that overshadow true driver events. We show that this problem stems largely from mutational heterogeneity and provide a novel analytical methodology, MutSigCV, for resolving the problem. We apply MutSigCV to exome sequences from 3,083 tumour–normal pairs and discover extraordinary variation in mutation frequency and spectrum within cancer types, which sheds light on mutational processes and disease aetiology, and in mutation frequency across the genome, which is strongly correlated with DNA replication timing and also with transcriptional activity. By incorporating mutational heterogeneity into the analyses, MutSigCV is able to eliminate most of the apparent artefactual findings and enable the identification of genes truly associated with cancer.", "title": "" }, { "docid": "647c10e242a4ceaecf218565e9b9675b", "text": "After 40 years of investigation, steady-state visually evoked potentials (SSVEPs) have been shown to be useful for many paradigms in cognitive (visual attention, binocular rivalry, working memory, and brain rhythms) and clinical neuroscience (aging, neurodegenerative disorders, schizophrenia, ophthalmic pathologies, migraine, autism, depression, anxiety, stress, and epilepsy). Recently, in engineering, SSVEPs found a novel application for SSVEP-driven brain-computer interface (BCI) systems. Although some SSVEP properties are well documented, many questions are still hotly debated. We provide an overview of recent SSVEP studies in neuroscience (using implanted and scalp EEG, fMRI, or PET), with the perspective of modern theories about the visual pathway. We investigate the steady-state evoked activity, its properties, and the mechanisms behind SSVEP generation. Next, we describe the SSVEP-BCI paradigm and review recently developed SSVEP-based BCI systems. Lastly, we outline future research directions related to basic and applied aspects of SSVEPs.", "title": "" }, { "docid": "595e68cfcf7b2606f42f2ad5afb9713a", "text": "Mammalian hibernators undergo a remarkable phenotypic switch that involves profound changes in physiology, morphology, and behavior in response to periods of unfavorable environmental conditions. The ability to hibernate is found throughout the class Mammalia and appears to involve differential expression of genes common to all mammals, rather than the induction of novel gene products unique to the hibernating state. The hibernation season is characterized by extended bouts of torpor, during which minimal body temperature (Tb) can fall as low as -2.9 degrees C and metabolism can be reduced to 1% of euthermic rates. 
Many global biochemical and physiological processes exploit low temperatures to lower reaction rates but retain the ability to resume full activity upon rewarming. Other critical functions must continue at physiologically relevant levels during torpor and be precisely regulated even at Tb values near 0 degrees C. Research using new tools of molecular and cellular biology is beginning to reveal how hibernators survive repeated cycles of torpor and arousal during the hibernation season. Comprehensive approaches that exploit advances in genomic and proteomic technologies are needed to further define the differentially expressed genes that distinguish the summer euthermic from winter hibernating states. Detailed understanding of hibernation from the molecular to organismal levels should enable the translation of this information to the development of a variety of hypothermic and hypometabolic strategies to improve outcomes for human and animal health.", "title": "" } ]
scidocsrr
7c0719b2936701c6e4ca5b3ed3cf2d91
Curating and contextualizing Twitter stories to assist with social newsgathering
[ { "docid": "463ef40777aaf14406186d5d4d99ba13", "text": "Social media is already a fixture for reporting for many journalists, especially around breaking news events where non-professionals may already be on the scene to share an eyewitness report, photo, or video of the event. At the same time, the huge amount of content posted in conjunction with such events serves as a challenge to finding interesting and trustworthy sources in the din of the stream. In this paper we develop and investigate new methods for filtering and assessing the verity of sources found through social media by journalists. We take a human centered design approach to developing a system, SRSR (\"Seriously Rapid Source Review\"), informed by journalistic practices and knowledge of information production in events. We then used the system, together with a realistic reporting scenario, to evaluate the filtering and visual cue features that we developed. Our evaluation offers insights into social media information sourcing practices and challenges, and highlights the role technology can play in the solution.", "title": "" } ]
[ { "docid": "7a6a1bf378f5bdfc6c373dc55cf0dabd", "text": "In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared memory multi-core setting. Since our algorithm is fully asynchronous—each core does not need to idle and wait for the other cores—the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers including asynchronous stochastic coordinate descent and multi-core LIBSVM.", "title": "" }, { "docid": "e693e811edb2196baa1fd22b25246eaf", "text": "The chicken is an excellent model organism for studying vertebrate limb development, mainly because of the ease of manipulating the developing limb in vivo. Classical chicken embryology has provided fate maps and elucidated the cell-cell interactions that specify limb pattern. The first defined chemical that can mimic one of these interactions was discovered by experiments on developing chick limbs and, over the last 15 years or so, the role of an increasing number of developmentally important genes has been uncovered. The principles that underlie limb development in chickens are applicable to other vertebrates and there are growing links with clinical genetics. The sequence of the chicken genome, together with other recently assembled chicken genomic resources, will present new opportunities for exploiting the ease of manipulating the limb.", "title": "" }, { "docid": "394d96f18402c7033f27f5ead8219698", "text": "Today, online social networks in the World Wide Web become increasingly interactive and networked. Web 2.0 technologies provide a multitude of platforms, such as blogs, wikis, and forums where for example consumers can disseminate data about products and manufacturers. This data provides an abundance of information on personal experiences and opinions which are extremely relevant for companies and sales organizations. A new approach based on text mining and social network analysis is presented which allows detecting opinion leaders and opinion trends. This allows getting a better understanding of the opinion formation. The overall concept is presented and illustrated by an example.", "title": "" }, { "docid": "6ccad3fd0fea9102d15bd37306f5f562", "text": "This paper reviews deposition, integration, and device fabrication of ferroelectric PbZrxTi1−xO3 (PZT) films for applications in microelectromechanical systems. As examples, a piezoelectric ultrasonic micromotor and pyroelectric infrared detector array are presented. A summary of the published data on the piezoelectric properties of PZT thin films is given. The figures of merit for various applications are discussed. Some considerations and results on operation, reliability, and depolarization of PZT thin films are presented.", "title": "" }, { "docid": "2891ce3327617e9e957488ea21e9a20c", "text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. 
Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.", "title": "" }, { "docid": "b5831795da97befd3241b9d7d085a20f", "text": "Want to learn more about the background and concepts of Internet congestion control? This indispensable text draws a sketch of the future in an easily comprehensible fashion. Special attention is placed on explaining the how and why of congestion control mechanisms complex issues so far hardly understood outside the congestion control research community. A chapter on Internet Traffic Management from the perspective of an Internet Service Provider demonstrates how the theory of congestion control impacts on the practicalities of service delivery.", "title": "" }, { "docid": "ec07bddc8bdc96678eebf49c7ee3752e", "text": "This study aimed to assess the effects of core stability training on lower limbs' muscular asymmetries and imbalances in team sport. Twenty footballers were divided into two groups, either core stability or control group. Before each daily practice, core stability group (n = 10) performed a core stability training programme, while control group (n = 10) did a standard warm-up. The effects of the core stability training programme were assessed by performing isokinetic tests and single-leg countermovement jumps. Significant improvement was found for knee extensors peak torque at 3.14 rad · s(-1) (14%; P < 0.05), knee flexors peak torque at 1.05 and 3.14 rad · s(-1) (19% and 22% with P < 0.01 and P < 0.01, respectively) and peak torque flexors/extensors ratios at 1.05 and 3.14 rad · s(-1) (7.7% and 8.5% with P < 0.05 and P < 0.05, respectively) only in the core stability group. The jump tests showed a significant reduction in the strength asymmetries in core stability group (-71.4%; P = 0.02) while a concurrent increase was seen in the control group (33.3%; P < 0.05). This study provides practical evidence in combining core exercises for optimal lower limbs strength balance development in young soccer players.", "title": "" }, { "docid": "eece6349d77b415115fa6afbbbd85190", "text": "BACKGROUND\nAcute appendicitis is the most common cause of acute abdomen. Approximately 7% of the population will be affected by this condition during full life. 
The development of the AIR score may contribute to diagnosis by associating easy clinical criteria and two simple laboratory tests.\n\n\nAIM\nTo evaluate the AIR score (Appendicitis Inflammatory Response score) as a tool for the diagnosis and prediction of severity of acute appendicitis.\n\n\nMETHOD\nAll patients undergoing surgical appendectomy were evaluated. From 273 patients, 126 were excluded due to exclusion criteria. All patients were submitted to the AIR score.\n\n\nRESULTS\nThe value of the C-reactive protein and the percentage of leukocytes segmented blood count showed a direct relationship with the phase of acute appendicitis.\n\n\nCONCLUSION\nAs for the laboratory criteria, serum C-reactive protein and assessment of the percentage of the polymorphonuclear leukocytes count were important for diagnosis and disease stratification.", "title": "" }, { "docid": "c1956e4c6b732fa6a420d4c69cfbe529", "text": "To improve the safety and comfort of a human-machine system, the machine needs to ‘know,’ in a real time manner, the human operator in the system. The machine’s assistance to the human can be fine tuned if the machine is able to sense the human’s state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility and authority. Examples are given of a driver assistance system for advanced automobile.", "title": "" }, { "docid": "3f5f8e75af4cc24e260f654f8834a76c", "text": "The Balanced Scorecard (BSC) methodology focuses on major critical issues of modern business organisations: the effective measurement of corporate performance and the evaluation of the successful implementation of corporate strategy. Despite the increased adoption of the BSC methodology by numerous business organisations during the last decade, limited case studies concern non-profit organisations (e.g. public sector, educational institutions, healthcare organisations, etc.). The main aim of this study is to present the development of a performance measurement system for public health care organisations, in the context of BSC methodology. The proposed approach considers the distinguished characteristics of the aforementioned sector (e.g. lack of competition, social character of organisations, etc.). The proposed measurement system contains the most important financial performance indicators, as well as non-financial performance indicators that are able to examine the quality of the provided services, the satisfaction of internal and external customers, the self-improvement system of the organisation and the ability of the organisation to adapt and change. These indicators play the role of Key Performance Indicators (KPIs), in the context of BSC methodology. The presented analysis is based on a MCDA approach, where the UTASTAR method is used in order to aggregate the marginal performance of KPIs. This approach is able to take into account the preferences of the management of the organisation regarding the achievement of the defined strategic objectives. The main results of the proposed approach refer to the evaluation of the overall scores for each one of the main dimensions of the BSC methodology (i.e. financial, customer, internal business process, and innovation-learning). These results are able to help the organisation to evaluate and revise its strategy, and generally to adopt modern management approaches in every day practise. © 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "a1df80a201943ad386a7836c7ba3ff94", "text": "This paper estimates the effect of air pollution on child hospitalizations for asthma using naturally occurring seasonal variations in pollution within zip codes. Of the pollutants considered, carbon monoxide (CO) has a significant effect on asthma for children ages 1-18: if 1998 pollution levels were at their 1992 levels, there would be a 5-14% increase in asthma admissions. Also, households respond to information about pollution with avoidance behavior, suggesting it is important to account for these endogenous responses when measuring the effect of pollution on health. Finally, the effect of pollution is greater for children of lower socio-economic status (SES), indicating that pollution is one potential mechanism by which SES affects health.", "title": "" }, { "docid": "78829447a6cbf0aa020ef098a275a16d", "text": "Black soldier fly (BSF), Hermetia illucens (L.) is widely used in bio-recycling of human food waste and manure of livestock. Eggs of BSF were commonly collected by egg-trapping technique for mass rearing. To find an efficient lure for BSF egg-trapping, this study compared the number of egg batch trapped by different lures, including fruit, food waste, chicken manure, pig manure, and dairy manure. The result showed that fruit wastes are the most efficient on trapping BSF eggs. To test the effects of fruit species, number of egg batch trapped by three different fruit species, papaya, banana, and pineapple were compared, and no difference were found among fruit species. Environmental factors including temperature, relative humidity, and light intensity were measured and compared in different study sites to examine their effects on egg-trapping. The results showed no differences on temperature, relative humidity, and overall light intensity between sites, but the stability of light environment differed between sites. BSF tend to lay more eggs in site with stable light environment.", "title": "" }, { "docid": "057621c670a9b7253ba829210c530dca", "text": "Actual challenges in production are individualization and short product lifecycles. To achieve this, the product development and the production planning must be accelerated. In some cases specialized production machines are engineered for automating production processes for a single product. Regarding the engineering of specialized production machines, there is often a sequential process starting with the mechanics, proceeding with the electrics and ending with the automation design. To accelerate this engineering process the different domains have to be parallelized as far as possible (Schlögl, 2008). Thereby the different domains start detailing in parallel after the definition of a common concept. The system integration follows the detailing with the objective to verify the system including the PLC-code. Regarding production machines, the system integration is done either by commissioning of the real machine or by validating the PLCcode against a model of the machine, so called virtual commissioning.", "title": "" }, { "docid": "ca4aa2c6f4096bbffaa2e3e1dd06fbe8", "text": "Hybrid unmanned aircraft, that combine hover capability with a wing for fast and efficient forward flight, have attracted a lot of attention in recent years. Many different designs are proposed, but one of the most promising is the tailsitter concept. However, tailsitters are difficult to control across the entire flight envelope, which often includes stalled flight. 
Additionally, their wing surface makes them susceptible to wind gusts. In this paper, we propose incremental nonlinear dynamic inversion control for the attitude and position control. The result is a single, continuous controller that is able to track the acceleration of the vehicle across the flight envelope. The proposed controller is implemented on the Cyclone hybrid UAV. Multiple outdoor experiments are performed, showing that unmodeled forces and moments are effectively compensated by the incremental control structure, and that accelerations can be tracked across the flight envelope. Finally, we provide a comprehensive procedure for the implementation of the controller on other types of hybrid UAVs.", "title": "" }, { "docid": "eaf30f31b332869bc45ff1288c41da71", "text": "Search Engines: Information Retrieval In Practice is written by Bruce Croft in the English language. Released on 2009-02-16, this book has a 552-page count that consists of helpful information with an easy reading experience. The book was published by Addison-Wesley, and it is one of the best books in its subject genre, giving you everything to love about reading. You can find the Search Engines: Information Retrieval In Practice book with ISBN 0136072240.", "title": "" }, { "docid": "dce75562a7e8b02364d39fd7eb407748", "text": "The ability to predict future user activity is invaluable when it comes to content recommendation and personalization. For instance, knowing when users will return to an online music service and what they will listen to increases user satisfaction and therefore user retention.\n We present a model based on Long-Short Term Memory to estimate when a user will return to a site and what their future listening behavior will be. In doing so, we aim to solve the problem of Just-In-Time recommendation, that is, to recommend the right items at the right time. We use tools from survival analysis for return time prediction and exponential families for future activity analysis. We show that the resulting multitask problem can be solved accurately, when applied to two real-world datasets.", "title": "" }, { "docid": "b59c843d687a1dbed0ef1b891c314424", "text": "Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification of the end-member signatures in the original data set may be challenging due to insufficient spatial resolution, mixtures happening at different scales, and unavailability of completely pure spectral signatures in the scene. However, the unmixing problem can also be approached in semisupervised fashion, i.e., by assuming that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance (e.g., spectra collected on the ground by a field spectroradiometer). Unmixing then amounts to finding the optimal subset of signatures in a (potentially very large) spectral library that can best model each mixed pixel in the scene. In practice, this is a combinatorial problem which calls for efficient linear sparse regression (SR) techniques based on sparsity-inducing regularizers, since the number of endmembers participating in a mixed pixel is usually very small compared with the (ever-growing) dimensionality (and availability) of spectral libraries. 
Linear SR is an area of very active research, with strong links to compressed sensing, basis pursuit (BP), BP denoising, and matching pursuit. In this paper, we study the linear spectral unmixing problem under the light of recent theoretical results published in those referred to areas. Furthermore, we provide a comparison of several available and new linear SR algorithms, with the ultimate goal of analyzing their potential in solving the spectral unmixing problem by resorting to available spectral libraries. Our experimental results, conducted using both simulated and real hyperspectral data sets collected by the NASA Jet Propulsion Laboratory's Airborne Visible Infrared Imaging Spectrometer and spectral libraries publicly available from the U.S. Geological Survey, indicate the potential of SR techniques in the task of accurately characterizing the mixed pixels using the library spectra. This opens new perspectives for spectral unmixing, since the abundance estimation process no longer depends on the availability of pure spectral signatures in the input data nor on the capacity of a certain endmember extraction algorithm to identify such pure signatures.", "title": "" }, { "docid": "956ffd90cc922e77632b8f9f79f42a98", "text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433", "title": "" }, { "docid": "589396a7c9dae0567f0bcd4d83461a6f", "text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS. This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. 
Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.", "title": "" }, { "docid": "cd55fc3fafe2618f743a845d89c3a796", "text": "According to the notation proposed by the International Federation for the Theory of Mechanisms and Machines IFToMM (Ionescu, 2003); a parallel manipulator is a mechanism where the motion of the end-effector, namely the moving or movable platform, is controlled by means of at least two kinematic chains. If each kinematic chain, also known popularly as limb or leg, has a single active joint, then the mechanism is called a fully-parallel mechanism, in which clearly the nominal degree of freedom equates the number of limbs. Tire-testing machines (Gough & Whitehall, 1962) and flight simulators (Stewart, 1965), appear to be the first transcendental applications of these complex mechanisms. Parallel manipulators, and in general mechanisms with parallel kinematic architectures, due to benefits --over their serial counterparts-such as higher stiffness and accuracy, have found interesting applications such as walking machines, pointing devices, multi-axis machine tools, micro manipulators, and so on. The pioneering contributions of Gough and Stewart, mainly the theoretical paper of Stewart (1965), influenced strongly the development of parallel manipulators giving birth to an intensive research field. In that way, recently several parallel mechanisms for industrial purposes have been constructed using the, now, classical hexapod as a base mechanism: Octahedral Hexapod HOH-600 (Ingersoll), HEXAPODE CMW 300 (CMW), Cosmo Center PM-600 (Okuma), F-200i (FANUC) and so on. On the other hand one cannot ignore that this kind of parallel kinematic structures have a limited and complex-shaped workspace. Furthermore, their rotation and position capabilities are highly coupled and therefore the control and calibration of them are rather complicated. It is well known that many industrial applications do not require the six degrees of freedom of a parallel manipulator. Thus in order to simplify the kinematics, mechanical assembly and control of parallel manipulators, an interesting trend is the development of the so called defective parallel manipulators, in other words, spatial parallel manipulators with fewer than six degrees of freedom. Special mention deserves the Delta robot, invented by Clavel (1991); which proved that parallel robotic manipulators are an excellent option for industrial applications where the accuracy and stiffness are fundamental characteristics. Consider for instance that the Adept Quattro robot, an application of the Delta robot, developed by Francois Pierrot in collaboration with Fatronik (Int. patent appl. WO/2006/087399), has a", "title": "" } ]
scidocsrr
dcd2dd029398250c200f85104d03a989
A Deep Feature based Multi-kernel Learning Approach for Video Emotion Recognition
[ { "docid": "3f88da8f70976c11bf5bab5f1d438d58", "text": "The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means based “bag-of-mouths” model, which extracts visual features around the mouth region and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for the combination of cues from these modalities into one common classifier. This achieves a considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67 % on the 2014 dataset.", "title": "" } ]
[ { "docid": "d5019a5536950482e166d68dc3a7cac7", "text": "Co-contamination of the environment with toxic chlorinated organic and heavy metal pollutants is one of the major problems facing industrialized nations today. Heavy metals may inhibit biodegradation of chlorinated organics by interacting with enzymes directly involved in biodegradation or those involved in general metabolism. Predictions of metal toxicity effects on organic pollutant biodegradation in co-contaminated soil and water environments is difficult since heavy metals may be present in a variety of chemical and physical forms. Recent advances in bioremediation of co-contaminated environments have focussed on the use of metal-resistant bacteria (cell and gene bioaugmentation), treatment amendments, clay minerals and chelating agents to reduce bioavailable heavy metal concentrations. Phytoremediation has also shown promise as an emerging alternative clean-up technology for co-contaminated environments. However, despite various investigations, in both aerobic and anaerobic systems, demonstrating that metal toxicity hampers the biodegradation of the organic component, a paucity of information exists in this area of research. Therefore, in this review, we discuss the problems associated with the degradation of chlorinated organics in co-contaminated environments, owing to metal toxicity and shed light on possible improvement strategies for effective bioremediation of sites co-contaminated with chlorinated organic compounds and heavy metals.", "title": "" }, { "docid": "5e9cc7e7933f85b6cffe103c074105d4", "text": "Substrate-integrated waveguides (SIWs) maintain the advantages of planar circuits (low loss, low profile, easy manufacturing, and integration in a planar circuit board) and improve the quality factor of filter resonators. Empty substrate-integrated waveguides (ESIWs) substantially reduce the insertion losses, because waves propagate through air instead of a lossy dielectric. The first ESIW used a simple tapering transition that cannot be used for thin substrates. A new transition has recently been proposed, which includes a taper also in the microstrip line, not only inside the ESIW, and so it can be used for all substrates, although measured return losses are only 13 dB. In this letter, the cited transition is improved by placing via holes that prevent undesired radiation, as well as two holes that help to ensure good accuracy in the mechanization of the input iris, thus allowing very good return losses (over 20 dB) in the measured results. A design procedure that allows the successful design of the proposed new transition is also provided. A back-to-back configuration of the improved new transition has been successfully manufactured and measured.", "title": "" }, { "docid": "9592fc0ec54a5216562478414dc68eb4", "text": "We consider the problem of finding the best arm in a stochastic multi-armed bandit game. The regret of a forecaster is here defined by the gap between the mean reward of the optimal arm and the mean reward of the ultimately chosen arm. We propose a highly exploring UCB policy and a new algorithm based on successive rejects. We show that these algorithms are essentially optimal since their regret decreases exponentially at a rate which is, up to a logarithmic factor, the best possible. 
However, while the UCB policy needs the tuning of a parameter depending on the unobservable hardness of the task, the successive rejects policy benefits from being parameter-free, and also independent of the scaling of the rewards. As a by-product of our analysis, we show that identifying the best arm (when it is unique) requires a number of samples of order (up to a log(K) factor) ∑_i 1/∆_i^2, where the sum is on the suboptimal arms and ∆_i represents the difference between the mean reward of the best arm and the one of arm i. This generalizes the well-known fact that one needs of order of 1/∆^2 samples to differentiate the means of two distributions with gap ∆.", "title": "" }, { "docid": "7fb9cb7cb777d7f245b2444cd2cd4f9d", "text": "Several recent studies have introduced lightweight versions of Java: reduced languages in which complex features like threads and reflection are dropped to enable rigorous arguments about key properties such as type safety. We carry this process a step further, omitting almost all features of the full language (including interfaces and even assignment) to obtain a small calculus, Featherweight Java, for which rigorous proofs are not only possible but easy. Featherweight Java bears a similar relation to Java as the lambda-calculus does to languages such as ML and Haskell. It offers a similar computational \"feel,\" providing classes, methods, fields, inheritance, and dynamic typecasts with a semantics closely following Java's. A proof of type safety for Featherweight Java thus illustrates many of the interesting features of a safety proof for the full language, while remaining pleasingly compact. The minimal syntax, typing rules, and operational semantics of Featherweight Java make it a handy tool for studying the consequences of extensions and variations. As an illustration of its utility in this regard, we extend Featherweight Java with generic classes in the style of GJ (Bracha, Odersky, Stoutamire, and Wadler) and give a detailed proof of type safety. The extended system formalizes for the first time some of the key features of GJ.", "title": "" }, { "docid": "db5dcaddaa38f472afaa84b61e4ea650", "text": "The dynamics of load, especially induction motors, are the driving force for short-term voltage stability (STVS) problems. In this paper, the equivalent rotation speed of motors is identified online and its recovery time is estimated next to realize an emergency-demand-response (EDR) based under speed load shedding (USLS) scheme to improve STVS. The proposed scheme consists of an EDR program and two regular stages (RSs). In the EDR program, contracted load is used as a fast-response resource rather than the last defense. The estimated recovery time (ERT) is used as the triggering signal for the EDR program. In the RSs, the amount of load to be shed at each bus is determined according to the assigned weights based on ERTs. Case studies on a practical power system in China Southern Power Grid have validated the performance of the proposed USLS scheme under various contingency scenarios. The utilization of EDR resources and the adaptive distribution of shedding amount in RSs guarantee faster voltage recovery. Therefore, USLS offers a new and more effective approach compared with existing under voltage load shedding to improve STVS.", "title": "" }, { "docid": "3e0a52bc1fdf84279dee74898fcd93bf", "text": "A variety of abnormal imaging findings of the petrous apex are encountered in children. 
Many petrous apex lesions are identified incidentally while images of the brain or head and neck are being obtained for indications unrelated to the temporal bone. Differential considerations of petrous apex lesions in children include “leave me alone” lesions, infectious or inflammatory lesions, fibro-osseous lesions, neoplasms and neoplasm-like lesions, as well as a few rare miscellaneous conditions. Some lesions are similar to those encountered in adults, and some are unique to children. Langerhans cell histiocytosis (LCH) and primary and metastatic pediatric malignancies such as neuroblastoma, rhabomyosarcoma and Ewing sarcoma are more likely to be encountered in children. Lesions such as petrous apex cholesterol granuloma, cholesteatoma and chondrosarcoma are more common in adults and are rarely a diagnostic consideration in children. We present a comprehensive pictorial review of CT and MRI appearances of pediatric petrous apex lesions.", "title": "" }, { "docid": "0d706058ff906f643d35295075fa4199", "text": "[Purpose] The present study examined the effects of treatment using PNF extension techniques on the pain, pressure pain, and neck and shoulder functions of the upper trapezius muscles of myofascial pain syndrome (MPS) patients. [Subjects] Thirty-two patients with MPS in the upper trapezius muscle were divided into two groups: a PNF group (n=16), and a control group (n=16) [Methods] The PNF group received upper trapezius muscle relaxation therapy and shoulder joint stabilizing exercises. Subjects in the control group received only the general physical therapies for the upper trapezius muscles. Subjects were measured for pain on a visual analog scale (VAS), pressure pain threshold (PPT), the neck disability index (NDI), and the Constant-Murley scale (CMS). [Results] None of the VAS, PPT, and NDI results showed significant differences between the groups, while performing postures, internal rotation, and external rotation among the CMS items showed significant differences between the groups. [Conclusion] Exercise programs that apply PNF techniques can be said to be effective at improving the function of MPS patients.", "title": "" }, { "docid": "1862f864cc1e24346c063ebc8a9e6a59", "text": "We focus on knowledge base construction (KBC) from richly formatted data. In contrast to KBC from text or tabular data, KBC from richly formatted data aims to extract relations conveyed jointly via textual, structural, tabular, and visual expressions. We introduce Fonduer, a machine-learning-based KBC system for richly formatted data. Fonduer presents a new data model that accounts for three challenging characteristics of richly formatted data: (1) prevalent document-level relations, (2) multimodality, and (3) data variety. Fonduer uses a new deep-learning model to automatically capture the representation (i.e., features) needed to learn how to extract relations from richly formatted data. Finally, Fonduer provides a new programming model that enables users to convert domain expertise, based on multiple modalities of information, to meaningful signals of supervision for training a KBC system. Fonduer-based KBC systems are in production for a range of use cases, including at a major online retailer. We compare Fonduer against state-of-the-art KBC approaches in four different domains. 
We show that Fonduer achieves an average improvement of 41 F1 points on the quality of the output knowledge base---and in some cases produces up to 1.87x the number of correct entries---compared to expert-curated public knowledge bases. We also conduct a user study to assess the usability of Fonduer's new programming model. We show that after using Fonduer for only 30 minutes, non-domain experts are able to design KBC systems that achieve on average 23 F1 points higher quality than traditional machine-learning-based KBC approaches.", "title": "" }, { "docid": "85d31f3940ee258589615661e596211d", "text": "Bulk Synchronous Parallelism (BSP) provides a good model for parallel processing of many large-scale graph applications, however it is unsuitable/inefficient for graph applications that require coordination, such as graph-coloring, subcoloring, and clustering. To address this problem, we present an efficient modification to the BSP model to implement serializability (sequential consistency) without reducing the highlyparallel nature of BSP. Our modification bypasses the message queues in BSP and reads directly from the worker’s memory for the internal vertex executions. To ensure serializability, coordination is performed— implemented via dining philosophers or token ring— only for border vertices partitioned across workers. We implement our modifications to BSP on Giraph, an open-source clone of Google’s Pregel. We show through a graph-coloring application that our modified framework, Giraphx, provides much better performance than implementing the application using dining-philosophers over Giraph. In fact, Giraphx outperforms Giraph even for embarrassingly parallel applications that do not require coordination, e.g., PageRank.", "title": "" }, { "docid": "838bd8a38f9d67d768a34183c72da07d", "text": "Jacobsen syndrome (JS), a rare disorder with multiple dysmorphic features, is caused by the terminal deletion of chromosome 11q. Typical features include mild to moderate psychomotor retardation, trigonocephaly, facial dysmorphism, cardiac defects, and thrombocytopenia, though none of these features are invariably present. The estimated occurrence of JS is about 1/100,000 births. The female/male ratio is 2:1. The patient admitted to our clinic at 3.5 years of age with a cardiac murmur and facial anomalies. Facial anomalies included trigonocephaly with bulging forehead, hypertelorism, telecanthus, downward slanting palpebral fissures, and a carp-shaped mouth. The patient also had strabismus. An echocardiogram demonstrated perimembranous aneurysmatic ventricular septal defect and a secundum atrial defect. The patient was <3rd percentile for height and weight and showed some developmental delay. Magnetic resonance imaging (MRI) showed hyperintensive gliotic signal changes in periventricular cerebral white matter, and leukodystrophy was suspected. Chromosomal analysis of the patient showed terminal deletion of chromosome 11. The karyotype was designated 46, XX, del(11) (q24.1). A review of published reports shows that the severity of the observed clinical abnormalities in patients with JS is not clearly correlated with the extent of the deletion. Most of the patients with JS had short stature, and some of them had documented growth hormone deficiency, or central or primary hypothyroidism. In patients with the classical phenotype, the diagnosis is suspected on the basis of clinical findings: intellectual disability, facial dysmorphic features and thrombocytopenia. 
The diagnosis must be confirmed by cytogenetic analysis. For patients who survive the neonatal period and infancy, the life expectancy remains unknown. In this report, we describe a patient with the clinical features of JS without thrombocytopenia. To our knowledge, this is the first case reported from Turkey.", "title": "" }, { "docid": "ec4b7c50f3277bb107961c9953fe3fc4", "text": "A blockchain is a linked-list of immutable tamper-proof blocks, which is stored at each participating node. Each block records a set of transactions and the associated metadata. Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto (Satoshi 2008), as a peer-to-peer money exchange system. Nakamoto referred to the transactional tokens exchanged among clients in his system, as Bitcoins. Overview", "title": "" }, { "docid": "68ecfd8434fb7b28e3c5c88effde3c2a", "text": "Enterprise Resource Planning (ERP) systems involve the purchase of pre-written software modules from third party suppliers, rather than bespoke (i.e. specially tailored) production of software requirements, and are often described as a buy rather than build approach to information systems development. Current research has shown that there has been a notable decrease in the satisfaction levels of ERP implementations over the period 1998-2000.\nThe environment in which such software is selected, implemented and used may be viewed as a social activity system, which consists of a variety of stakeholders e.g. users, developers, managers, suppliers and consultants. In such a context, an interpretive research approach (Walsham, 1995) is appropriate in order to understand the influences at work.\nThis paper reports on an interpretive study that attempts to understand the reasons for this apparent lack of success by analyzing issues raised by representatives of key stakeholder groups. Resulting critical success factors are then compared with those found in the literature, most notably those of Bancroft et al (1998).\nConclusions are drawn on a wide range of organizational, management and political issues that relate to the multiplicity of stakeholder perceptions.", "title": "" }, { "docid": "6df55b88150f5d52aa30ab770f464546", "text": "OBJECTIVES\nThe objective of this study has been to review the incidence of biological and technical complications in case of tooth-implant-supported fixed partial denture (FPD) treatments on the basis of survival data regarding clinical cases.\n\n\nMATERIAL AND METHODS\nBased on the treatment documentations of a Bundeswehr dental clinic (Cologne-Wahn German Air Force Garrison), the medical charts of 83 patients with tooth-implant-supported FPDs were completely recorded. The median follow-up time was 4.73 (time range: 2.2-8.3) years. In the process, survival curves according to Kaplan and Meier were applied in addition to frequency counts.\n\n\nRESULTS\nA total of 84 tooth-implant (83 patients) connected prostheses were followed (132 abutment teeth, 142 implant abutments (Branemark, Straumann). FPDs: the time-dependent illustration reveals that after 5 years, as many as 10% of the tooth-implant-supported FPDs already had to be subjected to a technical modification (renewal (n=2), reintegration (n=4), veneer fracture (n=5), fracture of frame (n=2)). In contrast to non-rigid connection of teeth and implants, technical modification measures were rarely required in case of tooth-implant-supported FPDs with a rigid connection. 
There was no statistical difference between technical complications and the used implant system. Abutment teeth and implants: during the observation period, none of the functionally loaded implants (n=142) had to be removed. Three of the overall 132 abutment teeth were lost because of periodontal inflammation. The time-dependent illustration reveals, that after 5 years as many as 8% of the abutment teeth already required corresponding therapeutic measures (periodontal treatment (5%), filling therapy (2.5%), endodontic treatment (0.5%)). After as few as 3 years, the connection related complications of implant abutments (abutment or occlusal screw loosening, loss of cementation) already had to be corrected in approximately 8% of the cases. In the utilization period there was no screw or abutment fracture.\n\n\nCONCLUSION\nTechnical complications of implant-supported FPDs are dependent on the different bridge configurations. When using rigid functional connections, similarly favourable values will be achieved as in case of solely implant-supported FPDs. In this study other characteristics like different fixation systems (screwed vs. cemented) or various implant systems had no significant effect to the rate of technical complications.", "title": "" }, { "docid": "88d8fe415f3026a45e0aa4b1a8c36c57", "text": "Traffic sign detection plays an important role in a number of practical applications, such as intelligent driver assistance and roadway inventory management. In order to process the large amount of data from either real-time videos or large off-line databases, a high-throughput traffic sign detection system is required. In this paper, we propose an FPGA-based hardware accelerator for traffic sign detection based on cascade classifiers. To maximize the throughput and power efficiency, we propose several novel ideas, including: 1) rearranged numerical operations; 2) shared image storage; 3) adaptive workload distribution; and 4) fast image block integration. The proposed design is evaluated on a Xilinx ZC706 board. When processing high-definition (1080p) video, it achieves the throughput of 126 frames/s and the energy efficiency of 0.041 J/frame.", "title": "" }, { "docid": "47eef1318d313e2f89bb700f8cd34472", "text": "This paper sets out to detect controversial news reports using online discussions as a source of information. We define controversy as a public discussion that divides society and demonstrate that a content and stylometric analysis of these debates yields useful signals for extracting disputed news items. Moreover, we argue that a debate-based approach could produce more generic models, since the discussion architectures we exploit to measure controversy occur on many different platforms.", "title": "" }, { "docid": "ed22fe0d13d4450005abe653f41df2c0", "text": "Polycystic ovary syndrome (PCOS) is a complex endocrine disorder affecting 5-10 % of women of reproductive age. It generally manifests with oligo/anovulatory cycles, hirsutism and polycystic ovaries, together with a considerable prevalence of insulin resistance. Although the aetiology of the syndrome is not completely understood yet, PCOS is considered a multifactorial disorder with various genetic, endocrine and environmental abnormalities. 
Moreover, PCOS patients have a higher risk of metabolic and cardiovascular diseases and their related morbidity, if compared to the general population.", "title": "" }, { "docid": "de7b16961bb4aa2001a3d0859f68e4c6", "text": "A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on self-calibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure since all images are taken from the same point in space. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. The calibration method is evaluated on several sets of synthetic and real image data.", "title": "" }, { "docid": "6956dadf7462db200559b5c51a09c481", "text": "We propose that the temporal dimension is fragile in that choices are insufficiently sensitive to it, and second, such sensitivity as exists is exceptionally malleable, unlike other dimensions such as money, which are attended by default. To test this, we axiomatize a \"constant-sensitivity\" discount function, and in four studies, we show that the degree of time-sensitivity is inadequate relative to the compound discounting norm, and strongly susceptible to manipulation. Time-sensitivity is increased by a comparative within-subject presentation (Experiment 1), direct instruction (Experiment 3), and provision of a visual cue for time duration (Experiment 4); time-sensitivity is decreased using a time pressure manipulation (Experiment 2). In each study, the sensitivity manipulation has an opposite effect on near-future and far-future valuations: Increased sensitivity decreases discounting in the near future and increases discounting in the far future. In contrast, such sensitivity manipulations have little effect on the money dimension.", "title": "" }, { "docid": "a031f8352b511987e95f7d9127b44436", "text": "The environmental robustness of DNN-based acoustic models can be significantly improved by using multi-condition training data. However, as data collection is a costly proposition, simulation of the desired conditions is a frequently adopted strategy. In this paper we detail a data augmentation approach for far-field ASR. We examine the impact of using simulated room impulse responses (RIRs), as real RIRs can be difficult to acquire, and also the effect of adding point-source noises. We find that the performance gap between using simulated and real RIRs can be eliminated when point-source noises are added. Further we show that the trained acoustic models not only perform well in the distant-talking scenario but also provide better results in the close-talking scenario. We evaluate our approach on several LVCSR tasks which can adequately represent both scenarios.", "title": "" }, { "docid": "f492f0121eba327778151a462e32e7b4", "text": "We describe the instructional software JFLAP 4.0 and how it can be used to provide a hands-on formal languages and automata theory course. 
JFLAP 4.0 doubles the number of chapters worth of material from JFLAP 3.1, now covering topics from eleven of thirteen chapters for a semester course. JFLAP 4.0 has easier interactive approaches to previous topics and covers many new topics including three parsing algorithms, multi-tape Turing machines, L-systems, and grammar transformations.", "title": "" } ]
scidocsrr
fdc696b24e0e5e14853186cd23f84f10
Hybrid Recommender Systems: A Systematic Literature Review
[ { "docid": "e870f2fe9a26b241bdeca882b6186169", "text": "Some people may be laughing when looking at you reading in your spare time. Some may admire you. And some may want to be like you and have a reading hobby. What about your own feelings? Have you felt right? Reading is a need and a hobby at once. This condition is the one that will make you feel that you must read. If you are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find it here.", "title": "" } ]
[ { "docid": "8c308305b4a04934126c4746c8333b52", "text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.", "title": "" }, { "docid": "8f025fda5bbf9468dc65c16539d0aa0d", "text": "Image compression is one of the key image processing techniques in signal processing and communication systems. Compression of images leads to reduction of storage space and reduces transmission bandwidth and hence also the cost. Advances in VLSI technology are rapidly changing the technological needs of common man. One of the major technological domains that are directly related to mankind is image compression. Neural networks can be used for image compression. Neural network architectures have proven to be more reliable, robust, and programmable and offer better performance when compared with classical techniques. In this work the main focus is on development of new architectures for hardware implementation of 3-D neural network based image compression optimizing area, power and speed as specific to ASIC implementation, and comparison with FPGA.", "title": "" }, { "docid": "f3345e524ff05bcd6c8a13bbb5e2aa6d", "text": "Permission-induced attacks, i.e., security breaches enabled by permission misuse, are among the most critical and frequent issues threatening the security of Android devices. By ignoring the temporal aspects of an attack during the analysis and enforcement, the state-of-the-art approaches aimed at protecting the users against such attacks are prone to have low-coverage in detection and high-disruption in prevention of permission-induced attacks. To address this shortcomings, we present Terminator, a temporal permission analysis and enforcement framework for Android. Leveraging temporal logic model checking,Terminator's analyzer identifies permission-induced threats with respect to dynamic permission states of the apps. At runtime, Terminator's enforcer selectively leases (i.e., temporarily grants) permissions to apps when the system is in a safe state, and revokes the permissions when the system moves to an unsafe state realizing the identified threats. The results of our experiments, conducted over thousands of apps, indicate that Terminator is able to provide an effective, yet non-disruptive defense against permission-induced attacks. 
We also show that our approach, which does not require modification to the Android framework or apps' implementation logic, is highly reliable and widely applicable.", "title": "" }, { "docid": "6087e066b04b9c3ac874f3c58979f89a", "text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.", "title": "" }, { "docid": "5e51b4363a156f4c3fde12da345e9438", "text": "In this work we present an annotation framework to capture causality between events, inspired by TimeML, and a language resource covering both temporal and causal relations. This data set is then used to build an automatic extraction system for causal signals and causal links between given event pairs. The evaluation and analysis of the system’s performance provides an insight into explicit causality in text and the connection between temporal and causal relations.", "title": "" }, { "docid": "57ffea840501c5e9a77a2c7e0d609d07", "text": "Datasets power computer vision research and drive breakthroughs. Larger and larger datasets are needed to better utilize the exponentially increasing computing power. However, dataset generation is both time consuming and expensive as human beings are required for image labelling. Human labelling cannot scale well. How can we generate larger image datasets more easily and faster? In this paper, we provide a new approach for large scale dataset generation. We generate images from 3D object models directly. The large volume of freely available 3D CAD models and mature computer graphics techniques make generating large scale image datasets from 3D models very efficient. As little human effort is involved in this process, it can scale very well. Rather than releasing a static dataset, we will also provide a software library for dataset generation so that the computer vision community can easily extend or modify the datasets accordingly.", "title": "" }, { "docid": "bd8ae67f959a7b840eff7e8c400a41e0", "text": "Enabling a humanoid robot to drive a car requires the development of a set of basic primitive actions. These include: walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal and steering), and moving with the whole-body, to ingress/egress the car. In this paper, we present a sensor-based reactive framework for realizing the central part of the complete task, consisting in driving the car along unknown roads. 
The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car, ask for assistive driving, or give the robot full control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements, to estimate the car linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal reference are sent to the robot control to achieve the driving task with the humanoid. We present results from a driving experience with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.", "title": "" }, { "docid": "0e2b885774f69342ade2b9ad1bc84835", "text": "History repeatedly demonstrates that rural communities have unique technological needs. Yet, we know little about how rural communities use modern technologies, so we lack knowledge on how to design for them. To address this gap, our empirical paper investigates behavioral differences between more than 3,000 rural and urban social media users. Using a dataset collected from a broadly popular social network site, we analyze users' profiles, 340,000 online friendships and 200,000 interpersonal messages. Using social capital theory, we predict differences between rural and urban users and find strong evidence supporting our hypotheses. Namely, rural people articulate far fewer friends online, and those friends live much closer to home. Our results also indicate that the groups have substantially different gender distributions and use privacy features differently. We conclude by discussing design implications drawn from our findings; most importantly, designers should reconsider the binary friend-or-not model to allow for incremental trust-building.", "title": "" }, { "docid": "94f1de78a229dc542a67ea564a0b259f", "text": "Voice enabled personal assistants like Microsoft Cortana are becoming better every day. As a result more users are relying on such software to accomplish more tasks. While these applications are significantly improving due to great advancements in the underlying technologies, there are still shortcomings in their performance resulting in a class of user queries that such assistants cannot yet handle with satisfactory results. We analyze the data from millions of user queries, and build a machine learning system capable of classifying user queries into two classes; a class of queries that are addressable by Cortana with high user satisfaction, and a class of queries that are not. We then use unsupervised learning to cluster similar queries and assign them to human assistants who can complement Cortana functionality.", "title": "" }, { "docid": "ff5fb2a555c9bcdfad666406b94ebc71", "text": "Driven by profits, spam reviews for product promotion or suppression become increasingly rampant in online shopping platforms. This paper focuses on detecting hidden spam users based on product reviews. In the literature, there have been tremendous studies suggesting diversified methods for spammer detection, but whether these methods can be combined effectively for higher performance remains unclear. Along this line, a hybrid PU-learning-based Spammer Detection (hPSD) model is proposed in this paper. 
On one hand, hPSD can detect multi-type spammers by injecting or recognizing only a small portion of positive samples, which meets particularly real-world application scenarios. More importantly, hPSD can leverage both user features and user relations to build a spammer classifier via a semi-supervised hybrid learning framework. Experimental results on movie data sets with shilling injection show that hPSD outperforms several state-of-the-art baseline methods. In particular, hPSD shows great potential in detecting hidden spammers as well as their underlying employers from a real-life Amazon data set. These demonstrate the effectiveness and practical value of hPSD for real-life applications.", "title": "" }, { "docid": "128de222f033bc2c50b5af44db8f6f6f", "text": "Copyright & reuse City University London has developed City Research Online so that its users may access the research outputs of City University London's staff. Copyright © and Moral Rights for this paper are retained by the individual author(s) and/ or other copyright holders. All material in City Research Online is checked for eligibility for copyright before being made available in the live archive. URLs from City Research Online may be freely distributed and linked to from other web pages.", "title": "" }, { "docid": "bf156a97587b55e8afe255fe1b1a8ac0", "text": "In recent years researches are focused towards mining infrequent patterns rather than frequent patterns. Mining infrequent pattern plays vital role in detecting any abnormal event. In this paper, an algorithm named Infrequent Pattern Miner for Data Streams (IPM-DS) is proposed for mining nonzero infrequent patterns from data streams. The proposed algorithm adopts the FP-growth based approach for generating all infrequent patterns. The proposed algorithm (IPM-DS) is evaluated using health data set collected from wearable physiological sensors that measure vital parameters such as Heart Rate (HR), Breathing Rate (BR), Oxygen Saturation (SPO2) and Blood pressure (BP) and also with two publically available data sets such as e-coli and Wine from UCI repository. The experimental results show that the proposed algorithm generates all possible infrequent patterns in less time.", "title": "" }, { "docid": "1657df28bba01b18fb26bb8c823ad4b4", "text": "Come with us to read a new book that is coming recently. Yeah, this is a new coming book that many people really want to read will you be one of them? Of course, you should be. It will not make you feel so hard to enjoy your life. Even some people think that reading is a hard to do, you must be sure that you can do it. Hard will be felt when you have no ideas about what kind of book to read. Or sometimes, your reading material is not interesting enough.", "title": "" }, { "docid": "2117e3c0cf7854c8878417b7d84491ce", "text": "We designed a new annotation scheme for formalising relation structures in research papers, through the investigation of computer science papers. The annotation scheme is based on the hypothesis that identifying the role of entities and events that are described in a paper is useful for intelligent information retrieval in academic literature, and the role can be determined by the relationship between the author and the described entities or events, and relationships among them. Using the scheme, we have annotated research abstracts from the IPSJ Journal published in Japanese by the Information Processing Society of Japan. 
On the basis of the annotated corpus, we have developed a prototype information extraction system which has the facility to classify sentences according to the relationship between entities mentioned, to help find the role of the entity in which the searcher is interested.", "title": "" }, { "docid": "43b0358c4d3fec1dd58600847bf0c1b8", "text": "The transformative promises and potential of Big and Open Data are substantial for e-government services, openness and transparency, governments, and the interaction between governments, citizens, and the business sector. From “smart” government to transformational government, Big and Open Data can foster collaboration; create real-time solutions to challenges in agriculture, health, transportation, and more; promote greater openness; and usher in a new era of policyand decision-making. There are, however, a range of policy challenges to address regarding Big and Open Data, including access and dissemination; digital asset management, archiving and preservation; privacy; and security. After presenting a discussion of the open data policies that serve as a foundation for Big Data initiatives, this paper examines the ways in which the current information policy framework fails to address a number of these policy challenges. It then offers recommendations intended to serve as a beginning point for a revised policy framework to address significant issues raised by the U.S. government’s engagement in Big Data efforts.", "title": "" }, { "docid": "db5ff75a7966ec6c1503764d7e510108", "text": "Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.", "title": "" }, { "docid": "39007be7d6b2f296e8dff368d49ac0fe", "text": "Neural oscillations at low- and high-frequency ranges are a fundamental feature of large-scale networks. Recent evidence has indicated that schizophrenia is associated with abnormal amplitude and synchrony of oscillatory activity, in particular, at high (beta/gamma) frequencies. These abnormalities are observed during task-related and spontaneous neuronal activity which may be important for understanding the pathophysiology of the syndrome. In this paper, we shall review the current evidence for impaired beta/gamma-band oscillations and their involvement in cognitive functions and certain symptoms of the disorder. In the first part, we will provide an update on neural oscillations during normal brain functions and discuss underlying mechanisms. This will be followed by a review of studies that have examined high-frequency oscillatory activity in schizophrenia and discuss evidence that relates abnormalities of oscillatory activity to disturbed excitatory/inhibitory (E/I) balance. 
Finally, we shall identify critical issues for future research in this area.", "title": "" }, { "docid": "9270af032d1adbf9829e7d723ff76849", "text": "To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors will lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technology for reducing the false matches, but it neglects global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale invariant feature transform (SIFT) matches between images based on the BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed for the verification of these matches to filter false matches. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency. Thus, it allows an effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to efficiently implement copy detection. In addition, we also extend the proposed method for partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than the state-of-the-art methods, and has comparable efficiency to the baseline method based on the BOW quantization.", "title": "" }, { "docid": "b9c40aa4c8ac9d4b6cbfb2411c542998", "text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). 
Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.", "title": "" }, { "docid": "2130cc3df3443c912d9a38f83a51ab14", "text": "Event cameras, such as dynamic vision sensors (DVS), and dynamic and activepixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynamic range of >120 dB and effective frame rates >1 kHz at data rates comparable to 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, we investigate in this work the use of the combined DVS and APS streams in endto-end driving applications. The dataset DDD17 accompanying this paper is the first open dataset of annotated DAVIS driving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor recording highway and city driving in daytime, evening, night, dry and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car’s on-board diagnostics interface. As an example application, we performed a preliminary end-toend learning study of using a convolutional neural network that is trained to predict the instantaneous steering angle from DVS and APS visual data.", "title": "" } ]
scidocsrr
520e792bfe78bc19c583ea1afb994d99
Avoidance of Information Technology Threats: A Theoretical Perspective
[ { "docid": "c90eae76dbde16de8d52170c2715bd7a", "text": "Several literatures converge on the idea that approach and avoidance/withdrawal behaviors are managed by two partially distinct self-regulatory system. The functions of these systems also appear to be embodied in discrepancyreducing and -enlarging feedback loops, respectively. This article describes how the feedback construct has been used to address these two classes of action and the affective experiences that relate to them. Further discussion centers on the development of measures of individual differences in approach and avoidance tendencies, and how these measures can be (and have been) used as research tools, to investigate whether other phenomena have their roots in approach or avoidance.", "title": "" } ]
[ { "docid": "bc58f2f9f6f5773f5f8b2696d9902281", "text": "Software development is a complicated process and requires careful planning to produce high quality software. In large software development projects, release planning may involve a lot of unique challenges. Due to time, budget and some other constraints, potentially there are many problems that may possibly occur. Subsequently, project managers have been trying to identify and understand release planning, challenges and possible resolutions which might help them in developing more effective and successful software products. This paper presents the findings from an empirical study which investigates release planning challenges. It takes a qualitative approach using interviews and observations with practitioners and project managers at five large software banking projects in Informatics Services Corporation (ISC) in Iran. The main objective of this study is to explore and increase the understanding of software release planning challenges in several software companies in a developing country. A number of challenges were elaborated and discussed in this study within the domain of software banking projects. These major challenges are classified into two main categories: the human-originated including people cooperation, disciplines and abilities; and the system-oriented including systematic approaches, resource constraints, complexity, and interdependency among the systems.", "title": "" }, { "docid": "11c397d0158350bccf741e34c1731a6c", "text": "The purpose of this study is to evaluate the impact of brand awareness to repurchase intention of customers with trilogy of emotions approach. The study population consisted if all the people in Yazd. As the research sample, 384 people who went to cell phone shopping centers in Yazd province responded to the questionnaire. Cronbach's alpha was used to determine the reliability of the questionnaire, and its values was 0.87. To examine the effects of brand awareness on purchase intention, structural equation modeling and AMOUS and SPSS softwares were used. The results of this study show that consumers cognition does not affect the purchase intention, but the customers’ conation and affection affect the re-purchase intention. In addition, brand awareness affects emotions (cognition, affection, and conation) and consumer purchase intention.", "title": "" }, { "docid": "7bbb9fed03444841fb66ec7f3820b9cb", "text": "In this paper, novel n- and p-type tunnel field-effect transistors (T-FETs) based on heterostructure Si/intrinsic-SiGe channel layer are proposed, which exhibit very small subthreshold swings, as well as low threshold voltages. The design parameters for improvement of the characteristics of the devices are studied and optimized based on the theoretical principles and simulation results. The proposed devices are designed to have extremely low off currents on the order of 1 fA/mum and engineered to exhibit substantially higher on currents compared with previously reported T-FET devices. Subthreshold swings as low as 15 mV/dec and threshold voltages as low as 0.13 V are achieved in these devices. Moreover, the T-FETs are designed to exhibit input and output characteristics compatible with CMOS-type digital-circuit applications. Using the proposed n- and p-type devices, the implementation of an inverter circuit based on T-FETs is reported. 
The performance of the T-FET-based inverter is compared with the 65-nm low-power CMOS-based inverter, and a gain of ~10^4 is achieved in static power consumption for the T-FET-based inverter with smaller gate delay.", "title": "" }, { "docid": "570855b9d7559c3f4963d1f4d7e28002", "text": "Along with the emergence and popularity of social communications on the Internet, topic discovery from short texts becomes fundamental to many applications that require semantic understanding of textual content. As a rising research field, short text topic modeling presents a new and complementary algorithmic methodology to supplement regular text topic modeling, especially targets to limited word co-occurrence information in short texts. This paper presents the first comprehensive open-source package, called STTM, for use in Java that integrates the state-of-the-art models of short text topic modeling algorithms, benchmark datasets, and abundant functions for model inference and evaluation. The package is designed to facilitate the expansion of new methods in this research field and make evaluations between the new approaches and existing ones accessible. STTM is open-sourced at https://github.com/qiang2100/STTM.", "title": "" }, { "docid": "82a4bac1745e2d5dd9e39c5a4bf5b3e9", "text": "Meaning can be as important as usability in the design of technology.", "title": "" }, { "docid": "34c343413fc748c1fc5e07fb40e3e97d", "text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.", "title": "" }, { "docid": "2053b95170b60fe9f79c107e6ce7e7b3", "text": "The treatment of inflammatory bowel disease (IBD) possesses numerous difficulties owing to the unclear etiology of the disease. This article overviews the drugs used in the treatment of IBD depending on the intensity of clinical symptoms (Canine Inflammatory Bowel Disease Activity Index and Canine Chronic Enteropathy Clinical Activity Index). Patients demonstrating mild symptoms of the disease are usually placed on an appropriate diet which may be combined with immunomodulative or probiotic treatment. In moderate progression of IBD, 5-aminosalicylic acid (mesalazine or olsalazine) derivatives may be administered. Patients showing severe symptoms of the disease are usually treated with immunosuppressive drugs, antibiotics and elimination diet. 
Since the immune system plays an important role in the pathogenesis of the disease, the advancements in biological therapy research will contribute to the progress in the treatment of canine and feline IBD in the coming years.", "title": "" }, { "docid": "ae2da83aaab6c272cdd6f2847e0801be", "text": "In this work, we propose CyberKrisi, a machine learning based framework for cyber physical farming. IT based farming is very young and emerging with numerous IoT devices such as wireless sensors, surveillance cameras, drones and weather stations. These devices produce large amounts of data about crop, soil, fertilization, irrigation as well as environment. We exploit this data to assess crop performance and compute crop forecasts. We envision an IoT gateway and machine learning gateway in the vicinity of farm land which performs predictions and recommendations as well as relays this data to cloud. Our contribution are twofold: first, we show an application framework for farmers to provide an interface in understanding Farm data. Second, we built a prototype to provide illiterate Farmers an interactive experience with Farm land.", "title": "" }, { "docid": "7e78dd27dd2d4da997ceef7e867b7cd2", "text": "Extracting facial feature is a key step in facial expression recognition (FER). Inaccurate feature extraction very often results in erroneous categorizing of facial expressions. Especially in robotic application, environmental factors such as illumination variation may cause FER system to extract feature inaccurately. In this paper, we propose a robust facial feature point extraction method to recognize facial expression in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. Face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on Gabor wavelet transformation to extract the facial points. Gabor jets are more invariable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitivity similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained respectively.", "title": "" }, { "docid": "c74a62bb92cb24faf0906c69644c7a53", "text": "For many years psychoanalytic and psychodynamic therapies have been considered to lack a credible evidence-base and have consistently failed to appear in lists of ‘empirically supported treatments’. This study systematically reviews the research evaluating the efficacy and effectiveness of psychodynamic psychotherapy for children and young people. The researchers identified 34 separate studies that met criteria for inclusion, including nine randomised controlled trials. While many of the studies reported are limited by sample size and lack of control groups, the review indicates that there is increasing evidence to suggest the effectiveness of psychoanalytic psychotherapy for children and adolescents. 
The article aims to provide as complete a picture as possible of the existing evidence base, thereby enabling more refined questions to be asked regarding the nature of the current evidence and gaps requiring further exploration.", "title": "" }, { "docid": "eed0e1b0dd4c97143a8343137e8aa53a", "text": "Recently, most large cloud providers, like Amazon and Microsoft, replicate their Virtual Machine Images (VMIs) on multiple geographically distributed data centers to offer fast service provisioning. Provisioning a service may require to transfer a VMI over the wide-area network (WAN) and therefore is dictated by the distribution of VMIs and the network bandwidth in-between sites. Nevertheless, existing methods to facilitate VMI management (i.e., retrieving VMIs) overlook network heterogeneity in geo-distributed clouds. In this paper, we design, implement and evaluate Nitro, a novel VMI management system that helps to minimize the transfer time of VMIs over a heterogeneous WAN. To achieve this goal, Nitro incorporates two complementary features. First, it makes use of deduplication to reduce the amount of data which will be transferred due to the high similarities within an image and in-between images. Second, Nitro is equipped with a network-aware data transfer strategy to effectively exploit links with high bandwidth when acquiring data and thus expedites the provisioning time. Experimental results show that our network-aware data transfer strategy offers the optimal solution when acquiring VMIs while introducing minimal overhead. Moreover, Nitro outperforms state-of-the-art VMI storage systems (e.g., OpenStack Swift) by up to 77%.", "title": "" }, { "docid": "72e9e772ede3d757122997d525d0f79c", "text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.", "title": "" }, { "docid": "6bcfc93a3bee13d2c5416e4cc5663646", "text": "The choice of an adequate object shape representation is critical for efficient grasping and robot manipulation. A good representation has to account for two requirements: it should allow uncertain sensory fusion in a probabilistic way and it should serve as a basis for efficient grasp and motion generation. We consider Gaussian process implicit surface potentials as object shape representations. Sensory observations condition the Gaussian process such that its posterior mean defines an implicit surface which becomes an estimate of the object shape. Uncertain visual, haptic and laser data can equally be fused in the same Gaussian process shape estimate. 
The resulting implicit surface potential can then be used directly as a basis for a reach and grasp controller, serving as an attractor for the grasp end-effectors and steering the orientation of contact points. Our proposed controller results in a smooth reach and grasp trajectory without strict separation of phases. We validate the shape estimation using Gaussian processes in a simulation on randomly sampled shapes and the grasp controller on a real robot with 7DoF arm and 7DoF hand.", "title": "" }, { "docid": "f7ff118b8f39fa0843c4861306b4910f", "text": "This article proposes a novel character-aware neural machine translation (NMT) model that views the input sequences as sequences of characters rather than words. On the use of row convolution (Amodei et al., 2015), the encoder of the proposed model composes word-level information from the input sequences of characters automatically. Since our model doesn’t rely on the boundaries between each word (as the whitespace boundaries in English), it is also applied to languages without explicit word segmentations (like Chinese). Experimental results on Chinese-English translation tasks show that the proposed character-aware NMT model can achieve comparable translation performance with the traditional word based NMT models. Despite the target side is still word based, the proposed model is able to generate much less unknown words.", "title": "" }, { "docid": "082630a33c0cc0de0e60a549fc57d8e8", "text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.", "title": "" }, { "docid": "b39ce00b531dcbf417d0b78c8b9bf1cd", "text": "With the transition of facial expression recognition (FER) from laboratory-controlled to challenging in-the-wild conditions and the recent success of deep learning techniques in various fields, deep neural networks have increasingly been leveraged to learn discriminative representations for automatic FER. Recent deep FER systems generally focus on two important issues: overfitting caused by a lack of sufficient training data and expression-unrelated variations, such as illumination, head pose and identity bias. In this paper, we provide a comprehensive survey on deep FER, including datasets and algorithms that provide insights into these intrinsic problems. First, we introduce the available datasets that are widely used in the literature and provide accepted data selection and evaluation principles for these datasets. 
We then describe the standard pipeline of a deep FER system with the related background knowledge and suggestions of applicable implementations for each stage. For the state of the art in deep FER, we review existing novel deep neural networks and related training strategies that are designed for FER based on both static images and dynamic image sequences, and discuss their advantages and limitations. Competitive performances on widely used benchmarks are also summarized in this section. We then extend our survey to additional related issues and application scenarios. Finally, we review the remaining challenges and corresponding opportunities in this field as well as future directions for the design of robust deep FER systems.", "title": "" }, { "docid": "dde5155ce92464a8584afc866c324bc2", "text": "Prior work in human trust of autonomous robots suggests the timing of reliability drops impact trust and control allocation strategies. However, trust is traditionally measured post-run, thereby masking the real-time changes in trust, reducing sensitivity to factors like inertia, and subjecting the measure to biases like the primacy-recency effect. Likewise, little is known on how feedback of robot confidence interacts in real-time with trust and control allocation strategies. An experiment to examine these issues showed trust loss due to early reliability drops is masked in traditional post-run measures, trust demonstrates inertia, and feedback alters allocation strategies independent of trust. The implications of specific findings on development of trust models and robot design are also discussed.", "title": "" }, { "docid": "c5f521d5e5e089261914f6784e2d77da", "text": "Generating structured query language (SQL) from natural language is an emerging research topic. This paper presents a new learning paradigm from indirect supervision of the answers to natural language questions, instead of SQL queries. This paradigm facilitates the acquisition of training data due to the abundant resources of question-answer pairs for various domains in the Internet, and expels the difficult SQL annotation job. An endto-end neural model integrating with reinforcement learning is proposed to learn SQL generation policy within the answerdriven learning paradigm. The model is evaluated on datasets of different domains, including movie and academic publication. Experimental results show that our model outperforms the baseline models.", "title": "" }, { "docid": "1a4d133147b7936ee340b572d7ca2dc4", "text": "(WSNs) have made them extremely useful in various applications. WSNs are susceptible to attack, because they are cheap, small devices and are deployed in open and unprotected environments. In this paper, we propose an Intrusion Detection System (IDS) created in Cluster-based Wireless Sensor Networks (CWSNs). According to the capability of Cluster Head (CH) is better than other Sensor Nodes (SNs) in CWSN. Therefore, a Hybrid Intrusion Detection System (HIDS) is designed in this research. The CH is used to detect intruders that not only decreases the consumption of energy, but also efficiently reduces the amount of information in the entire network. However, the lifetime of network can be prolonged by the proposed HIDS.", "title": "" }, { "docid": "0685af4227e1fdae9d49421f17443014", "text": "Massive open online courses (MOOCs) aim to facilitate open-access and massive-participation education. These courses have attracted millions of learners recently. 
At present, most MOOC platforms record the Web log data of learner interactions with course videos. Such large amounts of multivariate data pose a new challenge in terms of analyzing online learning behaviors. Previous studies have mainly focused on the aggregate behaviors of learners from a summative view; however, few attempts have been made to conduct a detailed analysis of such behaviors. To determine complex learning patterns in MOOC video interactions, this paper introduces a comprehensive visualization system called PeakVizor. This system enables course instructors and education experts to analyze the “peaks” or the video segments that generate numerous clickstreams. The system features three views at different levels: the overview with glyphs to display valuable statistics regarding the peaks detected; the flow view to present spatio-temporal information regarding the peaks; and the correlation view to show the correlation between different learner groups and the peaks. Case studies and interviews conducted with domain experts have demonstrated the usefulness and effectiveness of PeakVizor, and new findings about learning behaviors in MOOC platforms have been reported.", "title": "" } ]
scidocsrr
9514041d98f05f2e6fe6f1cc1686c30c
Zero-Shot Learning on Semantic Class Prototype Graph
[ { "docid": "be9fc2798c145abe70e652b7967c3760", "text": "Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.", "title": "" }, { "docid": "85be4bd00c69fdd43841fa7112df20b1", "text": "The role of semantics in zero-shot learning is considered. The effectiveness of previous approaches is analyzed according to the form of supervision provided. While some learn semantics independently, others only supervise the semantic subspace explained by training classes. Thus, the former is able to constrain the whole space but lacks the ability to model semantic correlations. The latter addresses this issue but leaves part of the semantic space unsupervised. This complementarity is exploited in a new convolutional neural network (CNN) framework, which proposes the use of semantics as constraints for recognition. Although a CNN trained for classification has no transfer ability, this can be encouraged by learning an hidden semantic layer together with a semantic code for classification. Two forms of semantic constraints are then introduced. The first is a loss-based regularizer that introduces a generalization constraint on each semantic predictor. The second is a codeword regularizer that favors semantic-to-class mappings consistent with prior semantic knowledge while allowing these to be learned from data. Significant improvements over the state-of-the-art are achieved on several datasets.", "title": "" } ]
[ { "docid": "ba60234f9b1769ab83f588326e95742e", "text": "Functional languages offer a high level of abstraction, which results in programs that are elegant and easy to understand. Central to the development of functional programming are inductive and coinductive types and associated programming constructs, such as pattern-matching. Whereas inductive types have a long tradition and are well supported in most languages, coinductive types are subject of more recent research and are less mainstream. We present CoCaml, a functional programming language extending OCaml, which allows us to define recursive functions on regular coinductive datatypes. These functions are defined like usual recursive functions, but parameterized by an equation solver. We present a full implementation of all the constructs and solvers and show how these can be used in a variety of examples, including operations on infinite lists, infinitary λ-terms, and p-adic numbers.", "title": "" }, { "docid": "0685c33de763bdedf2a1271198569965", "text": "The use of virtual-reality technology in the areas of rehabilitation and therapy continues to grow, with encouraging results being reported for applications that address human physical, cognitive, and psychological functioning. This article presents a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis for the field of VR rehabilitation and therapy. The SWOT analysis is a commonly employed framework in the business world for analyzing the factors that influence a company's competitive position in the marketplace with an eye to the future. However, the SWOT framework can also be usefully applied outside of the pure business domain. A quick check on the Internet will turn up SWOT analyses for urban-renewal projects, career planning, website design, youth sports programs, and evaluation of academic research centers, and it becomes obvious that it can be usefully applied to assess and guide any organized human endeavor designed to accomplish a mission. It is hoped that this structured examination of the factors relevant to the current and future status of VR rehabilitation will provide a good overview of the key issues and concerns that are relevant for understanding and advancing this vital application area.", "title": "" }, { "docid": "874cecfb3f21f4c145fda262e1eee369", "text": "For many languages that use non-Roman based indigenous scripts (e.g., Arabic, Greek and Indic languages) one can often find a large amount of user generated transliterated content on the Web in the Roman script. Such content creates a monolingual or multi-lingual space with more than one script which we refer to as the Mixed-Script space. IR in the mixed-script space is challenging because queries written in either the native or the Roman script need to be matched to the documents written in both the scripts. Moreover, transliterated content features extensive spelling variations. In this paper, we formally introduce the concept of Mixed-Script IR, and through analysis of the query logs of Bing search engine, estimate the prevalence and thereby establish the importance of this problem. We also give a principled solution to handle the mixed-script term matching and spelling variation where the terms across the scripts are modelled jointly in a deep-learning architecture and can be compared in a low-dimensional abstract space. 
We present an extensive empirical analysis of the proposed method along with the evaluation results in an ad-hoc retrieval setting of mixed-script IR where the proposed method achieves significantly better results (12% increase in MRR and 29% increase in MAP) compared to other state-of-the-art baselines.", "title": "" }, { "docid": "d94d31377a8dbe487f4fdcbfc0f2beb7", "text": "A core novelty of Alpha Zero is the interleaving of tree search and deep learning, which has proven very successful in board games like Chess, Shogi and Go. These games have a discrete action space. However, many real-world reinforcement learning domains have continuous action spaces, for example in robotic control, navigation and self-driving cars. This paper presents the necessary theoretical extensions of Alpha Zero to deal with continuous action space. We also provide a preliminary experiment on the Pendulum swing-up task, empirically verifying the feasibility of our approach. Thereby, this work provides a first step towards the application of iterated search and learning in domains with a continuous action space.", "title": "" }, { "docid": "fb162c94248297f35825ff1022ad2c59", "text": "This article traces the evolution of ambulance location and relocation models proposed over the past 30 years. The models are classified in two main categories. Deterministic models are used at the planning stage and ignore stochastic considerations regarding the availability of ambulances. Probabilistic models reflect the fact that ambulances operate as servers in a queueing system and cannot always answer a call. In addition, dynamic models have been developed to repeatedly relocate ambulances throughout the day. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "ada6c6b93b7d2109cd131a653117074a", "text": "Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length. We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long (thousands of steps) compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-Competition, and obtain state-of-the-art results on the latter.", "title": "" }, { "docid": "aa729fab5a97378b2ce9ae6ae4ee4e66", "text": "Previous information extraction (IE) systems are typically organized as a pipeline architecture of separated stages which make independent local decisions. 
When the data grows beyond a certain size, the extracted facts become inter-dependent and thus we can take advantage of information redundancy to conduct reasoning across documents and improve the performance of IE. We describe a joint inference approach based on information network structure to conduct cross-fact reasoning with an integer linear programming framework. Without using any additional labeled data this new method obtained 13.7%-24.4% user browsing cost reduction over a state-of-the-art IE system which extracts various types of facts independently.", "title": "" }, { "docid": "4c67486d34309ac506341224e5e7e994", "text": "Image deconvolution is still to be a challenging ill-posed problem for recovering a clear image from a given blurry image, when the point spread function is known. Although competitive deconvolution methods are numerically impressive and approach theoretical limits, they are becoming more complex, making analysis, and implementation difficult. Furthermore, accurate estimation of the regularization parameter is not easy for successfully solving image deconvolution problems. In this paper, we develop an effective approach for image restoration based on one explicit image filter, the guided filter. By applying the decouple of denoising and deblurring techniques to the deconvolution model, we reduce the optimization complexity and achieve a simple but effective algorithm to automatically compute the parameter in each iteration, which is based on Morozov's discrepancy principle. Experimental results demonstrate that the proposed algorithm outperforms many state-of-the-art deconvolution methods in terms of both ISNR and visual quality. Keywords—Image deconvolution, guided filter, edge-preserving, adaptive parameter estimation.", "title": "" }, { "docid": "e858a020c498272ce560656cecf15354", "text": "A low-voltage, low-power CMOS voltage reference with high temperature stability in a wide temperature range is presented. The temperature dependence of mobility and oxide capacitance is removed by employing transistors in saturation and triode regions and the temperature dependence of threshold voltage is removed by exploiting the transistors in weak inversion region. Implemented in 0.13um CMOS, the proposed voltage reference achieves temperature coefficient of 29.3ppm/°C against temperature variation of −50 – 130°C and line sensitivity of 337ppm/V against supply variation of 0.7–1.8V, while consuming 210nW from 0.7V supply and occupying 0.023mm2.", "title": "" }, { "docid": "6a3afa9644477304d2d32d99c99e07c8", "text": "This paper presents a comprehensive survey of five most widely used in-vehicle networks from three perspectives: system cost, data transmission capacity, and fault-tolerance capability. The paper reviews the pros and cons of each network, and identifies possible approaches to improve the quality of service (QoS). In addition, two classifications of automotive gateways have been presented along with a brief discussion about constructing a comprehensive in-vehicle communication system with different networks and automotive gateways. Furthermore, security threats to in-vehicle networks are briefly discussed, along with the corresponding protective methods. 
The survey concludes with highlighting the trends in future development of in-vehicle network technology and a proposal of a topology of the next generation in-vehicle network.", "title": "" }, { "docid": "bd3e5a403cc42952932a7efbd0d57719", "text": "The acoustic echo cancellation system is very important in the communication applications that are used these days; in view of this importance we have implemented this system practically by using DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on 8 subbands techniques using Least Mean Square (LMS) algorithm and Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring the performance according to Echo Return Loss Enhancement (ERLE) factor and Mean Square Error (MSE) factor. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter", "title": "" }, { "docid": "527c1e2a78e7f171025231a475a828b9", "text": "Cryptography is the science to transform the information in secure way. Encryption is best alternative to convert the data to be transferred to cipher data which is an unintelligible image or data which cannot be understood by any third person. Images are form of the multimedia data. There are many image encryption schemes already have been proposed, each one of them has its own potency and limitation. This paper presents a new algorithm for the image encryption/decryption scheme which has been proposed using chaotic neural network. Chaotic system produces the same results if the given inputs are same, it is unpredictable in the sense that it cannot be predicted in what way the system's behavior will change for any little change in the input to the system. The objective is to investigate the use of ANNs in the field of chaotic Cryptography. The weights of neural network are achieved based on chaotic sequence. The chaotic sequence generated and forwarded to ANN and weighs of ANN are updated which influence the generation of the key in the encryption algorithm. The algorithm has been implemented in the software tool MATLAB and results have been studied. To compare the relative performance peak signal to noise ratio (PSNR) and mean square error (MSE) are used.", "title": "" }, { "docid": "83dec7aa3435effc3040dfb08cb5754a", "text": "This paper examines the relationship between annual report readability and firm performance and earnings persistence. This is motivated by the Securities and Exchange Commission’s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using both the Fog Index from computational linguistics and the length of the document. I find that the annual reports of firms with lower earnings are harder to read (i.e., they have higher Fog and are longer). Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent. This suggests that managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors.", "title": "" }, { "docid": "f8878dd6e858f2acba35bf0f75168815", "text": "BACKGROUND\nPsoriasis can be found at several different localizations which may be of various impact on patients' quality of life (QoL). 
One of the easy visible, and difficult to conceal localizations are the nails.\n\n\nOBJECTIVE\nTo achieve more insight into the QoL of psoriatic patients with nail psoriasis, and to characterize the patients with nail involvement which are more prone to the impact of the nail alterations caused by psoriasis.\n\n\nMETHOD\nA self-administered questionnaire was distributed to all members (n = 5400) of the Dutch Psoriasis Association. The Dermatology Life Quality Index (DLQI) and the Nail Psoriasis Quality of life 10 (NPQ10) score were included as QoL measures. Severity of cutaneous lesions was determined using the self-administered psoriasis area and severity index (SAPASI).\n\n\nRESULTS\nPatients with nail psoriasis scored significantly higher mean scores on the DLQI (4.9 vs. 3.7, P = <0.001) and showed more severe psoriasis (SAPASI, 6.6 vs. 5.3, P = <0.001). Patients with coexistence of nail bed and nail matrix features showed higher DLQI scores compared with patients with involvement of one of the two localizations exclusively (5.3 vs. 4.2 vs. 4.3, P = 0.003). Patients with only nail bed alterations scored significant higher NPQ10 scores when compared with patients with only nail matrix features. Patients with psoriatic arthritis (PsA) and nail psoriasis experiences more impairments compared with nail psoriasis patients without PsA (DLQI 5.5 vs. 4.3, NPQ10 13.3 vs. 7.0). Females scored higher mean scores on all QoL scores.\n\n\nCONCLUSION\nGreater attention should be paid to the possible impact nail abnormalities have on patients with nail psoriasis, which can be identified by nail psoriasis specific questionnaires such as the NPQ10. As improving the severity of disease may have a positive influence on QoL, the outcome of QoL measurements should be taken into account when deciding on treatment strategies.", "title": "" }, { "docid": "0d62a781e48d6becc93bcac11692a3c2", "text": "A Fresnel lens with electrically-tunable diffraction efficiency while possessing high image quality is demonstrated using a phase-separated composite film (PSCOF). The light scattering-free PSCOF is obtained by anisotropic phase separation between liquid crystal and polymer. Such a lens can be operated below 12 volts and its switching time is reasonably fast (~10 ms). The maximum diffraction efficiency reaches ~35% for a linearly polarized light, which is close to the theoretical limit of 41%.", "title": "" }, { "docid": "d62e79e84e17c6e5b4e397e58077fd75", "text": "We develop a decentralized Bayesian model of college admissions with two ranked colleges, heterogeneous students and two realistic match frictions: students find it costly to apply to college, and college evaluations of their applications are uncertain. Students thus face a portfolio choice problem in their application decision, while colleges choose admissions standards that act like market-clearing prices. Enrollment at each college is affected by the standards at the other college through student portfolio reallocation. In equilibrium, student-college sorting may fail: weaker students sometimes apply more aggressively, and the weaker college might impose higher standards. Applying our framework, we analyze affirmative action, showing how it induces minority applicants to construct their application portfolios as if they were majority students of higher caliber. ∗Earlier versions were called “The College Admissions Problem with Uncertainty” and “A Supply and Demand Model of the College Admissions Problem”. 
We would like to thank Philipp Kircher (CoEditor) and three anonymous referees for their helpful comments and suggestions. Greg Lewis and Lones Smith are grateful for the financial support of the National Science Foundation. We have benefited from seminars at BU, UCLA, Georgetown, HBS, the 2006 Two-Sided Matching Conference (Bonn), 2006 SED (Vancouver), 2006 Latin American Econometric Society Meetings (Mexico City), and 2007 American Econometric Society Meetings (New Orleans), Iowa State, Harvard/MIT, the 2009 Atlanta NBER Conference, and Concordia. Parag Pathak and Philipp Kircher provided useful discussions of our paper. We are also grateful to John Bound and Brad Hershbein for providing us with student college applications data. †Arizona State University, Department of Economics, Tempe, AZ 85287. ‡Harvard University, Department of Economics, Cambridge, MA 02138. §University of Wisconsin, Department of Economics, Madison, WI 53706.", "title": "" }, { "docid": "459f368625415f80c88da01b69e94258", "text": "Data visualization and feature selection methods are proposed based on the joint mutual information and ICA. The visualization methods can find many good 2-D projections for high dimensional data interpretation, which cannot be easily found by the other existing methods. The new variable selection method is found to be better in eliminating redundancy in the inputs than other methods based on simple mutual information. The efficacy of the methods is illustrated on a radar signal analysis problem to find 2-D viewing coordinates for data visualization and to select inputs for a neural network classifier.", "title": "" }, { "docid": "5ab4db508bddd2481a867eecd41e6b9a", "text": "For centuries, music has been shared and remembered by two traditions: aural transmission and in the form of written documents normally called musical scores. Many of these scores exist in the form of unpublished manuscripts and hence they are in danger of being lost through the normal ravages of time. To preserve the music some form of typesetting or, ideally, a computer system that can automatically decode the symbolic images and create new scores is required. Programs analogous to optical character recognition systems called optical music recognition (OMR) systems have been under intensive development for many years. However, the results to date are far from ideal. Each of the proposed methods emphasizes different properties and therefore makes it difficult to effectively evaluate its competitive advantages. This article provides an overview of the literature concerning the automatic analysis of images of printed and handwritten musical scores. For self-containment and for the benefit of the reader, an introduction to OMR processing systems precedes the literature overview. The following study presents a reference scheme for any researcher wanting to compare new OMR algorithms against well-known ones.", "title": "" }, { "docid": "de455ce971c40fe49d14415cd8164122", "text": "Cardiovascular disease remains the most common health problem in developed countries, and residual risk after implementing all current therapies is still high. Permanent changes in lifestyle may be hard to achieve and people may not always be motivated enough to make the recommended modifications. Emerging research has explored the application of natural food-based strategies in disease management. In recent years, much focus has been placed on the beneficial effects of fish consumption. 
Many of the positive effects of fish consumption on dyslipidemia and heart diseases have been attributed to n-3 polyunsaturated fatty acids (n-3 PUFAs, i.e., EPA and DHA); however, fish is also an excellent source of protein and, recently, fish protein hydrolysates containing bioactive peptides have shown promising activities for the prevention/management of cardiovascular disease and associated health complications. The present review will focus on n-3 PUFAs and bioactive peptides effects on cardiovascular disease risk factors. Moreover, since considerable controversy exists regarding the association between n-3 PUFAs and major cardiovascular endpoints, we have also reviewed the main clinical trials supporting or not this association.", "title": "" }, { "docid": "0332be71a529382e82094239db31ea25", "text": "Nguyen and Shparlinski recently presented a polynomial-time algorithm that provably recovers the signer’s secret DSA key when a few bits of the random nonces k (used at each signature generation) are known for a number of DSA signatures at most linear in log q (q denoting as usual the small prime of DSA), under a reasonable assumption on the hash function used in DSA. The number of required bits is about log q, and can be further decreased to 2 if one assumes access to ideal lattice basis reduction, namely an oracle for the lattice closest vector problem for the infinity norm. All previously known results were only heuristic, including those of Howgrave-Graham and Smart who introduced the topic. Here, we obtain similar results for the elliptic curve variant of DSA (ECDSA).", "title": "" } ]
scidocsrr
a85c5f75026c981339b0a94ba6a95ccf
A Systematic Literature Review of Open Government Data Research: Challenges, Opportunities and Gaps
[ { "docid": "0ccc233ea8225de88882883d678793c8", "text": "Sustaining of Moore's Law over the next decade will require not only continued scaling of the physical dimensions of transistors but also performance improvement and aggressive reduction in power consumption. Heterojunction Tunnel FET (TFET) has emerged as promising transistor candidate for supply voltage scaling down to sub-0.5V due to the possibility of sub-kT/q switching without compromising on-current (ION). Recently, n-type III-V HTFET with reasonable on-current and sub-kT/q switching at supply voltage of 0.5V have been experimentally demonstrated. However, steep switching performance of III-V HTFET till date has been limited to range of drain current (IDS) spanning over less than a decade. In this work, we will present progress on complimentary Tunnel FETs and analyze primary roadblocks in the path towards achieving steep switching performance in III-V HTFET.", "title": "" }, { "docid": "1c0efa706f999ee0129d21acbd0ef5ab", "text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN", "title": "" }, { "docid": "299c0b60f9803c4eb60cc900b196a689", "text": "The exponentially growing production of data and the social trend towards openness and sharing are powerful forces that are changing the global economy and society. Governments around the world have become active participants in this evolution, opening up their data for access and re-use by public and private agents alike. The phenomenon of Open Government Data has spread around the world in the last four years, driven by the widely held belief that use of Open Government Data has the ability to generate both economic and social value. However, a cursory review of the popular press, as well as an investigation of academic research and empirical data, reveals the need to further understand the relationship between Open Government Data and value. In this paper, we focus on how use of Open Government Data can bring about new innovative solutions that can generate social and economic value. We apply a critical realist approach to a case study analysis to uncover the mechanisms that can explain how data is transformed to value. We explore the case of Opower, a pioneer in using and transforming data to induce a behavioral change that has resulted in a considerable reduction in energy use over the last six years.", "title": "" }, { "docid": "053470c0115d17ffbcbeea313f2da702", "text": "Although a significant number of public organizations have embraced the idea of open data, many are still reluctant to do this. One root cause is that the publicizing of data represents a shift from a closed to an open system of governance, which has a significant impact upon the relationships between public agencies and the users of open data. 
Yet no systematic research is available which compares the benefits of an open data with the barriers to its adoption. Based on interviews and a workshop, the benefits and adoption barriers for open data have been derived. The findings show that a gap exists between the promised benefits and barriers. They furthermore suggest that a conceptually simplistic view is often adopted with regard to open data, one which automatically correlates the publicizing of data with use and benefits. Five ‘myths’ are formulated promoting the use of open data and placing the expectations within a realistic perspective. Further, the recommendation is given to take a user’s view and to actively govern the relationship between government and its users.", "title": "" } ]
[ { "docid": "d9870dc31895226f60537b3e8591f9fd", "text": "This paper reports on the design of a low phase noise 76.8 MHz AlN-on-silicon reference oscillator using SiO2 as temperature compensation material. The paper presents profound theoretical optimization of all the important parameters for AlN-on-silicon width extensional mode resonators, filling into the knowledge gap targeting the tens of megahertz frequency range for this type of resonators. Low loading CMOS cross coupled series resonance oscillator is used to reach the-state-of-the-art LTE phase noise specifications. Phase noise of 123 dBc/Hz at 1 kHz, and 162 dBc/Hz at 1 MHz offset is achieved. The oscillator's integrated root mean square RMS jitter is 106 fs (10 kHz to 20 MHz), consuming 850 μA, with startup time of 250 μs, and a figure-of-merit FOM of 216 dB. This work offers a platform for high performance MEMS reference oscillators; where, it shows the applicability of replacing bulky quartz with MEMS resonators in cellular platforms. & 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3d5c4772d5d73343cc518d062e90f3db", "text": "Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.", "title": "" }, { "docid": "e86247471d4911cb84aa79911547045b", "text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. 
To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.", "title": "" }, { "docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2", "text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).", "title": "" }, { "docid": "f85a8a7e11a19d89f2709cc3c87b98fc", "text": "This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols are evaluated experimentally and via simulation, and are compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with delay lower-bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.", "title": "" }, { "docid": "e7f9e290eb7cc21b4a0785430546a33b", "text": "In this study, 306 individuals in 3 age groups--adolescents (13-16), youths (18-22), and adults (24 and older)--completed 2 questionnaire measures assessing risk preference and risky decision making, and 1 behavioral task measuring risk taking. Participants in each age group were randomly assigned to complete the measures either alone or with 2 same-aged peers. 
Analyses indicated that (a) risk taking and risky decision making decreased with age; (b) participants took more risks, focused more on the benefits than the costs of risky behavior, and made riskier decisions when in peer groups than alone; and (c) peer effects on risk taking and risky decision making were stronger among adolescents and youths than adults. These findings support the idea that adolescents are more inclined toward risky behavior and risky decision making than are adults and that peer influence plays an important role in explaining risky behavior during adolescence.", "title": "" }, { "docid": "5481f319296c007412e62129d2ec5943", "text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.", "title": "" }, { "docid": "4a29051479ac4b3ad7e7cd84540dbdb6", "text": "A compact, shared-aperture antenna (SAA) configuration consisting of various planar antennas embedded into a single footprint is presented in this article. An L-probefed, suspended-plate, horizontally polarized antenna operating in an 900-MHz band; an aperture-coupled, vertically polarized, microstrip antenna operating at 4.2-GHz; a 2 × 2 microstrip patch array operating at the X band; a low-side-lobe level (SLL), corporate-fed, 8 × 4 microstrip planar array for synthetic aperture radar (SAR) in the X band; and a printed, single-arm, circularly polarized, tilted-beam spiral antenna operating at the C band are integrated into a single aperture for simultaneous operation. This antenna system could find potential application in many airborne and unmanned aircraft vehicle (UAV) technologies. While the design of these antennas is not that critical, their optimal placement in a compact configuration for simultaneous operation with minimal interference poses a significant challenge to the designer. The placement optimization was arrived at based on extensive numerical fullwave optimizations.", "title": "" }, { "docid": "e6f34f5b5cae1b2e8d7387e9154284ed", "text": "In this paper the fundamental knowledge of a variable reluctance resolver is presented and an analytical model is demonstrated. With the simulation results are calculated and validated by measurements on a sensor test bench. Based on the introduced model, mechanical and electrical failures of any variable reluctance sensor can be analyzed. The model based simulation is compared to the measurement results and future prospects are given.", "title": "" }, { "docid": "41a54cd203b0964a6c3d9c2b3addff46", "text": "Increasing occupancy rates and revenue by improving customer experience is the aim of modern hospitality organizations. To achieve these results, hotel managers need to have a deep knowledge of customers’ needs, behavior, and preferences and be aware of the ways in which the services delivered create value for the customers and then stimulate their retention and loyalty.
In this article a methodological framework to analyze the guest–hotel relationship and to profile hotel guests is discussed, focusing on the process of designing a customer information system and particularly the guest information matrix on which the system database will be built.", "title": "" }, { "docid": "2ff3d496f0174ffc0e3bd21952c8f0ae", "text": "Each time a latency in responding to a stimulus is measured, we owe a debt to F. C. Donders, who in the mid-19th century made the fundamental discovery that the time required to perform a mental computation reveals something fundamental about how the mind works. Donders expressed the idea in the following simple and optimistic statement about the feasibility of measuring the mind: “Will all quantitative treatment of mental processes be out of the question then? By no means! An important factor seemed to be susceptible to measurement: I refer to the time required for simple mental processes” (Donders, 1868/1969, pp. 413–414). With particular variations of simple stimuli and subjects’ choices, Donders demonstrated that it is possible to bring order to understanding invisible thought processes by computing the time that elapses between stimulus presentation and response production. A more specific observation he offered lies at the center of our own modern understanding of mental operations:", "title": "" }, { "docid": "67beb9dbd03ae20d4e45a928fdb61f47", "text": "representation of the game. It was programmed in LISP. Further use of abstraction was also studied by Friedenbach (1980). The combination of search, heuristics, and expert systems led to the best programs in the eighties. At the end of the eighties a new type of Go programs emerged. These programs made an intensive use of pattern recognition. This approach was discussed in detail by Boon (1990). In the following years, different AI techniques, such as Reinforcement Learning (Schraudolph, Dayan, and Sejnowski, 1993), Monte Carlo (Brügmann, 1993), and Neural Networks (Richards, Moriarty, and Miikkulainen, 1998), were tested in Go. However, programs applying these techniques were not able to surpass the level of the best programs. The combination of search, heuristics, expert systems, and pattern recognition remained the winning methodology. Brügmann (1993) proposed to use Monte-Carlo evaluations as an alternative technique for Computer Go. His idea did not get many followers in the 1990s. In the following decade, Bouzy and Helmstetter (2003) and Bouzy (2006) combined Monte-Carlo evaluations and search in Indigo. The program won three bronze medals at the Olympiads of 2004, 2005, and 2006. Their pioneering research inspired the development of Monte-Carlo Tree Search (MCTS) (Coulom, 2006; Kocsis and Szepesvári, 2006; Chaslot et al., 2006a). Since 2007, MCTS programs are dominating the Computer Go field. MCTS will be explained in the next chapter. 2.6 Go Programs MANGO and MOGO In this subsection, we briefly describe the Go programs MANGO and MOGO that we use for the experiments in the thesis. Their performance in various tournaments is discussed as well.", "title": "" }, { "docid": "662fef280f2d03ae535bfbcc06f32810", "text": "This paper describes a voiceless speech recognition technique that utilizes dynamic visual features to represent the facial movements during phonation. The dynamic features extracted from the mouth video are used to classify utterances without using the acoustic data.
The audio signals of consonants are more confusing than vowels and the facial movements involved in pronunciation of consonants are more discernible. Thus, this paper focuses on identifying consonants using visual information. This paper adopts a visual speech model that categorizes utterances into sequences of smallest visually distinguishable units known as visemes. The viseme model used is based on the viseme model of Moving Picture Experts Group 4 (MPEG-4) standard. The facial movements are segmented from the video data using motion history images (MHI). MHI is a spatio-temporal template (grayscale image) generated from the video data using accumulative image subtraction technique. The proposed approach combines discrete stationary wavelet transform (SWT) and Zernike moments to extract rotation invariant features from the MHI. A feedforward multilayer perceptron (MLP) neural network is used to classify the features based on the patterns of visible facial movements. The preliminary experimental results indicate that the proposed technique is suitable for recognition of English consonants.", "title": "" }, { "docid": "9ea0612f646228a3da41b7f55c23e825", "text": "It is shown that many published models for the Stanford Question Answering Dataset (Rajpurkar et al., 2016) lack robustness, suffering an over 50% decrease in F1 score during adversarial evaluation based on the AddSent (Jia and Liang, 2017) algorithm. It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. We propose a novel alternative adversary-generation algorithm, AddSentDiverse, that significantly increases the variance within the adversarial training data by providing effective examples that punish the model for making certain superficial assumptions. Further, in order to improve robustness to AddSent’s semantic perturbations (e.g., antonyms), we jointly improve the model’s semantic-relationship learning capabilities in addition to our AddSentDiversebased adversarial training data augmentation. With these additions, we show that we can make a state-of-the-art model significantly more robust, achieving a 36.5% increase in F1 score under many different types of adversarial evaluation while maintaining performance on the regular SQuAD task.", "title": "" }, { "docid": "54477e35cf5cfcfc61e4dc675449a068", "text": "Nowadays the amount of data that is being generated every day is increasing in a high level for various sectors. In fact, this volume and diversity of data push us to think wisely for a better solution to store, process and analyze it in the right way. Taking into consideration the healthcare industry, there is a great benefit for using the concept of big data, due to the diversity of data that we are dealing with, the extant, and the velocity which lead us to think about providing the best care for the patients. In this paper, we aim to present a new architecture model for health data. The framework supports the storage and the management of unstructured medical data in a distributed environment based on multi-agent paradigm. The integration of the mobile agent model into hadoop ecosystem will give us the opportunity to enable instant communication process between multiple health repositories.", "title": "" }, { "docid": "09c9a0990946fd884df70d4eeab46ecc", "text": "Studies of technological change constitute a field of growing importance and sophistication. 
In this paper we contribute to the discussion with a methodological reflection and application of multi-stage patent citation analysis for the measurement of inventive progress. Investigating specific patterns of patent citation data, we conclude that single-stage citation analysis cannot reveal technological paths or lineages. Therefore, one should also make use of indirect citations and bibliographical coupling. To measure aspects of cumulative inventive progress, we develop a “shared specialization measure” of patent families. We relate this measure to an expert rating of the technological value added in the field of variable valve actuation for internal combustion engines. In sum, the study presents promising evidence for multi-stage patent citation analysis in order to explain aspects of technological change. JEL classification: O31", "title": "" }, { "docid": "72a283eda92eb25404536308d8909999", "text": "This paper presents a 128.7nW analog front-end amplifier and Gm-C filter for biomedical sensing applications, specifically for Electroencephalogram (EEG) use. The proposed neural amplifier has a supply voltage of 1.8V, consumes a total current of 71.59nA, for a total dissipated power of 128nW and has a gain of 40dB. Also, a 3rd order Butterworth Low Pass Gm-C Filter with a 14.7nS transconductor is designed and presented. The filter has a pass band suitable for use in EEG (1-100Hz). The amplifier and filter utilize current sources without resistance which provide 56nA and (1.154nA ×5) respectively. The proposed amplifier occupies an area of 0.26mm2 in 0.3μm TSMC process.", "title": "" }, { "docid": "fecfd19eaf90b735cf00e727fca768b8", "text": "Real-time detection of irregularities in visual data is very invaluable and useful in many prospective applications including surveillance, patient monitoring systems, etc. With the surge of deep learning methods in the recent years, researchers have tried a wide spectrum of methods for different applications. However, for the case of irregularity or anomaly detection in videos, training an end-to-end model is still an open challenge, since often irregularity is not well-defined and there are not enough irregular samples to use during training. In this paper, inspired by the success of generative adversarial networks (GANs) for training deep models in unsupervised or self-supervised settings, we propose an end-to-end deep network for detection and fine localization of irregularities in videos (and images). Our proposed architecture is composed of two networks, which are trained in competing with each other while collaborating to find the irregularity. One network works as a pixel-level irregularity Inpainter, and the other works as a patch-level Detector. After an adversarial self-supervised training, in which I tries to fool D into accepting its inpainted output as regular (normal), the two networks collaborate to detect and fine-segment the irregularity in any given testing video. Our results on three different datasets show that our method can outperform the state-of-the-art and fine-segment the irregularity.", "title": "" }, { "docid": "f1e646a0627a5c61a0f73a41d35ccac7", "text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially.
A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.", "title": "" } ]
scidocsrr
4c7b94f0e7470fdd5d62b4174ecb3c7c
Please Share! Online Word of Mouth and Charitable Crowdfunding
[ { "docid": "befc5dbf4da526963f8aa180e1fda522", "text": "Charities publicize the donations they receive, generally according to dollar categories rather than the exact amount. Donors in turn tend to give the minimum amount necessary to get into a category. These facts suggest that donors have a taste for having their donations made public. This paper models the effects of such a taste for “prestige” on the behavior of donors and charities. I show how a taste for prestige means that charities can increase donations by using categories. The paper also discusses the effect of a taste for prestige on competition between charities. © 1998 Elsevier Science S.A.", "title": "" } ]
[ { "docid": "976f16e21505277525fa697876b8fe96", "text": "A general technique for obtaining intermediate-band crystal filters from prototype low-pass (LP) networks which are neither symmetric nor antimetric is presented. This immediately enables us to now realize the class of low-transient responses. The bandpass (BP) filter appears as a cascade of symmetric lattice sections, obtained by partitioning the LP prototype filter, inserting constant reactances where necessary, and then applying the LP to BP frequency transformation. Manuscript received January 7, 1974; revised October 9, 1974. The author is with the Systems Development Division, Westinghouse Electric Corporation, Baltimore, Md. The cascade is composed of only two fundamental sections. Finally, the method introduced is illustrated with an example.", "title": "" }, { "docid": "16f96e68b19fb561d2232ea4e586bb2e", "text": "In this letter, charge-based capacitance measurement (CBCM) is applied to characterize bias-dependent capacitances in a CMOS transistor. Due to its special advantage of being free from the errors induced by charge injection, the operation of charge-injection-induced-error-free CBCM allows for the extraction of full-range gate capacitance from the accumulation region to the inversion region and the overlap capacitance of MOSFET devices with submicrometer dimensions.", "title": "" }, { "docid": "c17522f4b9f3b229dae56b394adb69a1", "text": "This paper investigates fault effects and error propagation in a FlexRay-based network with hybrid topology that includes a bus subnetwork and a star subnetwork. The investigation is based on about 43500 bit-flip fault injection inside different parts of the FlexRay communication controller. To do this, a FlexRay communication controller is modeled by Verilog HDL at the behavioral level. Then, this controller is exploited to setup a FlexRay-based network composed of eight nodes (four nodes in the bus subnetwork and four nodes in the star subnetwork). The faults are injected in a node of the bus subnetwork and a node of the star subnetwork of the hybrid network Then, the faults resulting in the three kinds of errors, namely, content errors, syntax errors and boundary violation errors are characterized. The results of fault injection show that boundary violation errors and content errors are negligibly propagated to the star subnetwork and syntax errors propagation is almost equal in the both bus and star subnetworks. Totally, the percentage of errors propagation in the bus subnetwork is more than the star subnetwork.", "title": "" }, { "docid": "ec36f5a41650cc6c3ba17eb6bd928677", "text": "Deep learning techniques based on Convolutional Neural Networks (CNNs) are extensively used for the classification of hyperspectral images. These techniques present high computational cost. In this paper, a GPU (Graphics Processing Unit) implementation of a spatial-spectral supervised classification scheme based on CNNs and applied to remote sensing datasets is presented. In particular, two deep learning libraries, Caffe and CuDNN, are used and compared. In order to achieve an efficient GPU projection, different techniques and optimizations have been applied. The implemented scheme comprises Principal Component Analysis (PCA) to extract the main features, a patch extraction around each pixel to take the spatial information into account, one convolutional layer for processing the spectral information, and fully connected layers to perform the classification. 
To improve the initial GPU implementation accuracy, a second convolutional layer has been added. High speedups are obtained together with competitive classification accuracies.", "title": "" }, { "docid": "83da776714bf49c3bbb64976d20e26a2", "text": "Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper.", "title": "" }, { "docid": "3251674643f09b73a24d037dc1076c72", "text": "Although the link between sagittal plane motion and exercise intensity has been highlighted, no study assessed if different workloads lead to changes in three-dimensional cycling kinematics. This study compared three-dimensional joint and segment kinematics between competitive and recreational road cyclists across different workloads. Twenty-four road male cyclists (12 competitive and 12 recreational) underwent an incremental workload test to determine aerobic peak power output. In a following session, cyclists performed four trials at sub-maximal workloads (65, 75, 85 and 95% of their aerobic peak power output) at 90 rpm of pedalling cadence. Mean hip adduction, thigh rotation, shank rotation, pelvis inclination (latero-lateral and anterior-posterior), spine inclination and rotation were computed at the power section of the crank cycle (12 o'clock to 6 o'clock crank positions) using three-dimensional kinematics. Greater lateral spine inclination (p < .01, 5-16%, effect sizes = 0.09-0.25) and larger spine rotation (p < .01, 16-29%, effect sizes = 0.31-0.70) were observed for recreational cyclists than competitive cyclists across workload trials. No differences in segment and joint angles were observed from changes in workload with significant individual effects on spine inclination (p < .01). No workload effects were found in segment angles but differences, although small, existed when comparing competitive road to recreational cyclists. 
When conducting assessment of joint and segment motions, workload between 65 and 95% of individual cyclists' peak power output could be used.", "title": "" }, { "docid": "1e80983e98d5d94605315b8ef45af0fd", "text": "Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present Population Based Training (PBT), a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.", "title": "" }, { "docid": "e77cf8938714824d46cfdbdb1b809f93", "text": "Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fullyobserved samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.", "title": "" }, { "docid": "9fa8133dcb3baef047ee887fea1ed5a3", "text": "In this paper, we present an effective hierarchical shot classification scheme for broadcast soccer video. We first partition a video into replay and non-replay shots with replay logo detection. Then, non-replay shots are further classified into Long, Medium, Close-up or Out-field types with color and texture features based on a decision tree. 
We tested the method on real broadcast FIFA soccer videos, and the experimental results demonstrate its effectiveness.", "title": "" }, { "docid": "3d3589a002f8195bb20324dd8a8f5d76", "text": "Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) Dex-Net 3.0 achieves success rates of 98%, 82%, and 58% respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dex-net.", "title": "" }, { "docid": "541de3d6af2edacf7396e5ca66c385e2", "text": "This paper presents a simple and intuitive method for mining search engine query logs to get fast query recommendations on a large scale industrial strength search engine. In order to get a more comprehensive solution, we combine two methods together. On the one hand, we study and model search engine users' sequential search behavior, and interpret this consecutive search behavior as client-side query refinement, that should form the basis for the search engine's own query refinement process. On the other hand, we combine this method with a traditional content based similarity method to compensate for the high sparsity of real query log data, and more specifically, the shortness of most query sessions. To evaluate our method, we use one hundred day worth query logs from SINA's search engine to do off-line mining. Then we analyze three independent editors evaluations on a query test set. Based on their judgement, our method was found to be effective for finding related queries, despite its simplicity. In addition to the subjective editors' rating, we also perform tests based on actual anonymous user search sessions.", "title": "" }, { "docid": "dac8564305055eaf9291e731dbf9a44d", "text": "Named Entity Recognition and classification (NERC) is an essential and challenging task in (NLP). Kannada is a highly inflectional and agglutinating language providing one of the richest and most challenging sets of linguistic and statistical features resulting in long and complex word forms, which is large in number. It is primarily a suffixing Language and inflected word starts with a root and may have several suffixes added to the right. It is also a Free word order Language.
Like other Indian languages, it is a resource poor language. Annotated corpora, name dictionaries, good morphological analyzers, Parts of Speech (POS) taggers etc. are not yet available in the required measure and not many works are reported for this language. The work related to NERC in Kannada is not yet reported. In recent years, automatic named entity recognition and extraction systems have become one of the popular research areas. Building NERC for Kannada is challenging. It seeks to classify words which represent names in text into predefined categories like person name, location, organization, date, time etc. This paper deals with some attempts in this direction. This work starts with experiments in building Semi-Automated Statistical Machine learning NLP Models based on Noun Taggers. In this paper we have developed an algorithm based on supervised learning techniques that include Hidden Markov Model (HMM). Some sample results are reported.", "title": "" }, { "docid": "055c9fad6d2f246fc1b6cbb1bce26a92", "text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.", "title": "" }, { "docid": "caac45f02e29295d592ee784697c6210", "text": "The studies included in this PhD thesis examined the interactions of syphilis, which is caused by Treponema pallidum, and HIV. Syphilis reemerged worldwide in the late 1990s and hereafter increasing rates of early syphilis were also reported in Denmark. The proportion of patients with concurrent HIV has been substantial, ranging from one third to almost two thirds of patients diagnosed with syphilis some years. Given that syphilis facilitates transmission and acquisition of HIV the two sexually transmitted diseases are of major public health concern. Further, syphilis has a negative impact on HIV infection, resulting in increasing viral loads and decreasing CD4 cell counts during syphilis infection.
Likewise, HIV has an impact on the clinical course of syphilis; patients with concurrent HIV are thought to be at increased risk of neurological complications and treatment failure. Almost ten per cent of Danish men with syphilis acquired HIV infection within five years after they were diagnosed with syphilis during an 11-year study period. Interestingly, the risk of HIV declined during the later part of the period. Moreover, HIV-infected men had a substantial increased risk of re-infection with syphilis compared to HIV-uninfected men. As one third of the HIV-infected patients had viral loads >1,000 copies/ml, our conclusion supported the initiation of cART in more HIV-infected MSM to reduce HIV transmission. During a five-year study period, including the majority of HIV-infected patients from the Copenhagen area, we observed that syphilis was diagnosed in the primary, secondary, early and late latent stage. These patients were treated with either doxycycline or penicillin and the rate of treatment failure was similar in the two groups, indicating that doxycycline can be used as a treatment alternative - at least in an HIV-infected population. During a four-year study period, the T. pallidum strain type distribution was investigated among patients diagnosed by PCR testing of material from genital lesions. In total, 22 strain types were identified. HIV-infected patients were diagnosed with nine different strains types and a difference by HIV status was not observed indicating that HIV-infected patients did not belong to separate sexual networks. In conclusion, concurrent HIV remains common in patients diagnosed with syphilis in Denmark, both in those diagnosed by serological testing and PCR testing. Although the rate of syphilis has stabilized in recent years, a spread to low-risk groups is of concern, especially due to the complex symptomatology of syphilis. However, given the efficient treatment options and the targeted screening of pregnant women and persons at higher risk of syphilis, control of the infection seems within reach. Avoiding new HIV infections is the major challenge and here cART may play a prominent role.", "title": "" }, { "docid": "dc3de555216f10d84890ecb1165774ff", "text": "Research into the visual perception of human emotion has traditionally focused on the facial expression of emotions. Recently researchers have turned to the more challenging field of emotional body language, i.e. emotion expression through body pose and motion. In this work, we approach recognition of basic emotional categories from a computational perspective. In keeping with recent computational models of the visual cortex, we construct a biologically plausible hierarchy of neural detectors, which can discriminate seven basic emotional states from static views of associated body poses. The model is evaluated against human test subjects on a recent set of stimuli manufactured for research on emotional body language.", "title": "" }, { "docid": "c699ede2caeb5953decc55d8e42c2741", "text": "Traditionally, two distinct approaches have been employed for exploratory factor analysis: maximum likelihood factor analysis and principal component analysis. A third alternative, called regularized exploratory factor analysis, was introduced recently in the psychometric literature. Small sample size is an important issue that has received considerable discussion in the factor analysis literature. 
However, little is known about the differential performance of these three approaches to exploratory factor analysis in a small sample size scenario. A simulation study and an empirical example demonstrate that regularized exploratory factor analysis may be recommended over the two traditional approaches, particularly when sample sizes are small (below 50) and the sample covariance matrix is near singular.", "title": "" }, { "docid": "6dbe972f08097355b32685c5793f853a", "text": "BACKGROUND/AIMS\nRheumatoid arthritis (RA) is a serious health problem resulting in significant morbidity and disability. Tai Chi may be beneficial to patients with RA as a result of effects on muscle strength and 'mind-body' interactions. To obtain preliminary data on the effects of Tai Chi on RA, we conducted a pilot randomized controlled trial. Twenty patients with functional class I or II RA were randomly assigned to Tai Chi or attention control in twice-weekly sessions for 12 weeks. The American College of Rheumatology (ACR) 20 response criterion, functional capacity, health-related quality of life and the depression index were assessed.\n\n\nRESULTS\nAt 12 weeks, 5/10 patients (50%) randomized to Tai Chi achieved an ACR 20% response compared with 0/10 (0%) in the control (p = 0.03). Tai Chi had greater improvement in the disability index (p = 0.01), vitality subscale of the Medical Outcome Study Short Form 36 (p = 0.01) and the depression index (p = 0.003). Similar trends to improvement were also observed for disease activity, functional capacity and health-related quality of life. No adverse events were observed and no patients withdrew from the study.\n\n\nCONCLUSION\nTai Chi appears safe and may be beneficial for functional class I or II RA. These promising results warrant further investigation into the potential complementary role of Tai Chi for treatment of RA.", "title": "" }, { "docid": "38a8471eb20b08499136ef459eb866c2", "text": "Some recent studies suggest that in progressive multiple sclerosis, neurodegeneration may occur independently from inflammation. The aim of our study was to analyse the interdependence of inflammation, neurodegeneration and disease progression in various multiple sclerosis stages in relation to lesional activity and clinical course, with a particular focus on progressive multiple sclerosis. The study is based on detailed quantification of different inflammatory cells in relation to axonal injury in 67 multiple sclerosis autopsies from different disease stages and 28 controls without neurological disease or brain lesions. We found that pronounced inflammation in the brain is not only present in acute and relapsing multiple sclerosis but also in the secondary and primary progressive disease. T- and B-cell infiltrates correlated with the activity of demyelinating lesions, while plasma cell infiltrates were most pronounced in patients with secondary progressive multiple sclerosis (SPMS) and primary progressive multiple sclerosis (PPMS) and even persisted, when T- and B-cell infiltrates declined to levels seen in age matched controls. A highly significant association between inflammation and axonal injury was seen in the global multiple sclerosis population as well as in progressive multiple sclerosis alone. In older patients (median 76 years) with long-disease duration (median 372 months), inflammatory infiltrates declined to levels similar to those found in age-matched controls and the extent of axonal injury, too, was comparable with that in age-matched controls. 
Ongoing neurodegeneration in these patients, which exceeded the extent found in normal controls, could be attributed to confounding pathologies such as Alzheimer's or vascular disease. Our study suggests a close association between inflammation and neurodegeneration in all lesions and disease stages of multiple sclerosis. It further indicates that the disease processes of multiple sclerosis may die out in aged patients with long-standing disease.", "title": "" }, { "docid": "e75620184f4baca454af714daf5e7801", "text": "Although fingerprint experts have presented evidence in criminal courts for more than a century, there have been few scientific investigations of the human capacity to discriminate these patterns. A recent latent print matching experiment shows that qualified, court-practicing fingerprint experts are exceedingly accurate (and more conservative) compared with novices, but they do make errors. Here, a rationale for the design of this experiment is provided. We argue that fidelity, generalizability, and control must be balanced to answer important research questions; that the proficiency and competence of fingerprint examiners are best determined when experiments include highly similar print pairs, in a signal detection paradigm, where the ground truth is known; and that inferring from this experiment the statement \"The error rate of fingerprint identification is 0.68%\" would be unjustified. In closing, the ramifications of these findings for the future psychological study of forensic expertise and the implications for expert testimony and public policy are considered.", "title": "" } ]
scidocsrr
942371f9a23a5bae9dd577d4a892384f
From Benedict Cumberbatch to Sherlock Holmes: Character Identification in TV series without a Script
[ { "docid": "d5a4c2d61e7d65f1972ed934f399847e", "text": "We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor/action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor/action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.", "title": "" } ]
[ { "docid": "c0e1be5859be1fc5871993193a709f2d", "text": "This paper reviews the possible causes and effects for no-fault-found observations and intermittent failures in electronic products and summarizes them into cause and effect diagrams. Several types of intermittent hardware failures of electronic assemblies are investigated, and their characteristics and mechanisms are explored. One solder joint intermittent failure case study is presented. The paper then discusses when no-fault-found observations should be considered as failures. Guidelines for assessment of intermittent failures are then provided in the discussion and conclusions. © 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "aed8a983fc25d2c1c71401b338d8f5f3", "text": "Heart disease is the leading cause of death in the world over the past 10 years. Researchers have been using several data mining techniques to help health care professionals in the diagnosis of heart disease. Decision Tree is one of the successful data mining techniques used. However, most research has applied J4.8 Decision Tree, based on Gain Ratio and binary discretization. Gini Index and Information Gain are two other successful types of Decision Trees that are less used in the diagnosis of heart disease. Also other discretization techniques, voting method, and reduced error pruning are known to produce more accurate Decision Trees. This research investigates applying a range of techniques to different types of Decision Trees seeking better performance in heart disease diagnosis. A widely used benchmark data set is used in this research. To evaluate the performance of the alternative Decision Trees the sensitivity, specificity, and accuracy are calculated. The research proposes a model that outperforms J4.8 Decision Tree and Bagging algorithm in the diagnosis of heart disease patients.", "title": "" }, { "docid": "f5eb797695e17d59ed9359456a8acfc8", "text": "The availability of inexpensive CMOS technologies that perform well at microwave frequencies has created new opportunities for automated material handling within supply chain management (SCM) that may, in hindsight, be viewed as revolutionary. This article outlines the system architecture and circuit design considerations that influence the development of radio frequency identification (RFID) tags through a case study involving a high-performance implementation that achieves a throughput of nearly 800 tags/s at a range greater than 10 m. The impact of a novel circuit design approach ideally suited to the power and die area challenges is also discussed. Insights gleaned from first-generation efforts are reviewed as an object lesson in how to make RFID technology for SCM, at a cost measured in pennies per tag, reach its full potential through a generation 2 standard.", "title": "" }, { "docid": "726728a9ada1d4823ce5420d57b80201", "text": "OBJECTIVE\nTo investigate the association of muscle function and subgroups of low back pain (no low back pain, pelvic girdle pain, lumbar pain and combined pelvic girdle pain and lumbar pain) in relation to pregnancy.\n\n\nDESIGN\nProspective cohort study.\n\n\nSUBJECTS\nConsecutively enrolled pregnant women seen in gestational weeks 12-18 (n = 301) and 3 months postpartum (n = 262).\n\n\nMETHODS\nClassification into subgroups by means of mechanical assessment of the lumbar spine, pelvic pain provocation tests, standard history and a pain drawing.
Trunk muscle endurance, hip muscle strength (dynamometer) and gait speed were investigated.\n\n\nRESULTS\nIn pregnancy 116 women had no low back pain, 33% (n = 99) had pelvic girdle pain, 11% (n = 32) had lumbar pain and 18% (n = 54) had combined pelvic girdle pain and lumbar pain. The prevalence of pelvic girdle pain/combined pelvic girdle pain and lumbar pain decreased postpartum, whereas the prevalence of lumbar pain remained stable. Women with pelvic girdle pain and/or combined pelvic girdle pain and lumbar pain had lower values for trunk muscle endurance, hip extension and gait speed as compared to women without low back pain in pregnancy and postpartum (p < 0.001-0.04). Women with pelvic girdle pain throughout the study had lower values of back flexor endurance compared with women without low back pain.\n\n\nCONCLUSION\nMuscle dysfunction was associated with pelvic girdle pain, which should be taken into consideration when developing treatment strategies and preventive measures.", "title": "" }, { "docid": "3c41bdaeaaa40481c8e68ad00426214d", "text": "Image captioning is an important task, applicable to virtual assistants, editing tools, image indexing, and support of the disabled. In recent years significant progress has been made in image captioning, using Recurrent Neural Networks powered by long-short-term-memory (LSTM) units. Despite mitigating the vanishing gradient problem, and despite their compelling ability to memorize dependencies, LSTM units are complex and inherently sequential across time. To address this issue, recent work has shown benefits of convolutional networks for machine translation and conditional image generation [9, 34, 35]. Inspired by their success, in this paper, we develop a convolutional image captioning technique. We demonstrate its efficacy on the challenging MSCOCO dataset and demonstrate performance on par with the LSTM baseline [16], while having a faster training time per number of parameters. We also perform a detailed analysis, providing compelling reasons in favor of convolutional language generation approaches.", "title": "" }, { "docid": "b3012ab055e3f4352b3473700c30c085", "text": "Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows 4.90% improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. 
We also adapt the ZSR method for zero-shot retrieval and show a corresponding 22.45% improvement in mean average precision (mAP).", "title": "" }, { "docid": "4608c8ca2cf58ca9388c25bb590a71df", "text": "Life expectancy in most countries has been increasing continually over the last few decades thanks to significant improvements in medicine, public health, as well as personal and environmental hygiene. However, increased life expectancy combined with falling birth rates is expected to engender a large aging demographic in the near future that would impose significant burdens on the socio-economic structure of these countries. Therefore, it is essential to develop cost-effective, easy-to-use systems for the sake of elderly healthcare and well-being. Remote health monitoring, based on non-invasive and wearable sensors, actuators and modern communication and information technologies offers an efficient and cost-effective solution that allows the elderly to continue to live in their comfortable home environment instead of expensive healthcare facilities. These systems will also allow healthcare personnel to monitor important physiological signs of their patients in real time, assess health conditions and provide feedback from distant facilities. In this paper, we have presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years. A survey on textile-based sensors that can potentially be used in wearable systems is also presented. Finally, compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.", "title": "" }, { "docid": "eb22a8448b82f6915850fe4d60440b3b", "text": "In story-based games or other interactive systems, a drama manager (DM) is an omniscient agent that acts to bring about a particular sequence of plot points for the player to experience. Traditionally, the DM's narrative evaluation criteria are solely derived from a human designer. We present a DM that learns a model of the player's storytelling preferences and automatically recommends a narrative experience that is predicted to optimize the player's experience while conforming to the human designer's storytelling intentions. Our DM is also capable of manipulating the space of narrative trajectories such that the player is more likely to make choices that result in the recommended experience. Our DM uses a novel algorithm, called prefix-based collaborative filtering (PBCF), that solves the sequential recommendation problem to find a sequence of plot points that maximizes the player's rating of his or her experience. We evaluate our DM in an interactive storytelling environment based on choose-your-own-adventure novels. Our experiments show that our algorithms can improve the player's experience over the designer's storytelling intentions alone and can deliver more personalized experiences than other interactive narrative systems while preserving players' agency.", "title": "" }, { "docid": "3da6fadaf2363545dfd0cea87fe2b5da", "text": "It is a marketplace reality that marketing managers sometimes inflict switching costs on their customers, to inhibit them from defecting to new suppliers. In a competitive setting, such as the Internet market, where competition may be only one click away, has the potential of switching costs as an exit barrier and a binding ingredient of customer loyalty become altered? 
To address that issue, this article examines the moderating effects of switching costs on customer loyalty through both satisfaction and perceived-value measures. The results, evoked from a Web-based survey of online service users, indicate that companies that strive for customer loyalty should focus primarily on satisfaction and perceived value. The moderating effects of switching costs on the association of customer loyalty and customer satisfaction and perceived value are significant only when the level of customer satisfaction or perceived value is above average. In light of the major findings, the article sets forth strategic implications for customer loyalty in the setting of electronic commerce. In the consumer marketing community, customer loyalty has long been regarded as an important goal (Reichheld & Schefter, 2000). Both marketing academics and professionals have attempted to uncover the most prominent antecedents of customer loyalty. Numerous studies have", "title": "" }, { "docid": "ddecb743bc098a3e31ca58bc17810cf1", "text": "Maxout network is a powerful alternate to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout network is prone to overfitting thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for max-out networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.", "title": "" }, { "docid": "7c0748301936c39166b9f91ba72d92ef", "text": "methods and native methods are considered to be type safe if they do not override a final method. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(abstract, AccessFlags). methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), member(native, AccessFlags). private methods and static methods are orthogonal to dynamic method dispatch, so they never override other methods (§5.4.5). doesNotOverrideFinalMethod(class('java/lang/Object', L), Method) :isBootstrapLoader(L). doesNotOverrideFinalMethod(Class, Method) :isPrivate(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isStatic(Method, Class). doesNotOverrideFinalMethod(Class, Method) :isNotPrivate(Method, Class), isNotStatic(Method, Class), doesNotOverrideFinalMethodOfSuperclass(Class, Method). doesNotOverrideFinalMethodOfSuperclass(Class, Method) :classSuperClassName(Class, SuperclassName), classDefiningLoader(Class, L), loadedClass(SuperclassName, L, Superclass), classMethods(Superclass, SuperMethodList), finalMethodNotOverridden(Method, Superclass, SuperMethodList). 
4.10 Verification of class Files THE CLASS FILE FORMAT 202 final methods that are private and/or static are unusual, as private methods and static methods cannot be overridden per se. Therefore, if a final private method or a final static method is found, it was logically not overridden by another method. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isFinal(Method, Superclass), isStatic(Method, Superclass). If a non-final private method or a non-final static method is found, skip over it because it is orthogonal to overriding. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isPrivate(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isStatic(Method, Superclass), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). THE CLASS FILE FORMAT Verification of class Files 4.10 203 If a non-final, non-private, non-static method is found, then indeed a final method was not overridden. Otherwise, recurse upwards. finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), member(method(_, Name, Descriptor), SuperMethodList), isNotFinal(Method, Superclass), isNotStatic(Method, Superclass), isNotPrivate(Method, Superclass). finalMethodNotOverridden(Method, Superclass, SuperMethodList) :methodName(Method, Name), methodDescriptor(Method, Descriptor), notMember(method(_, Name, Descriptor), SuperMethodList), doesNotOverrideFinalMethodOfSuperclass(Superclass, Method). 4.10 Verification of class Files THE CLASS FILE FORMAT 204 4.10.1.6 Type Checking Methods with Code Non-abstract, non-native methods are type correct if they have code and the code is type correct. methodIsTypeSafe(Class, Method) :doesNotOverrideFinalMethod(Class, Method), methodAccessFlags(Method, AccessFlags), methodAttributes(Method, Attributes), notMember(native, AccessFlags), notMember(abstract, AccessFlags), member(attribute('Code', _), Attributes), methodWithCodeIsTypeSafe(Class, Method). A method with code is type safe if it is possible to merge the code and the stack map frames into a single stream such that each stack map frame precedes the instruction it corresponds to, and the merged stream is type correct. The method's exception handlers, if any, must also be legal. methodWithCodeIsTypeSafe(Class, Method) :parseCodeAttribute(Class, Method, FrameSize, MaxStack, ParsedCode, Handlers, StackMap), mergeStackMapAndCode(StackMap, ParsedCode, MergedCode), methodInitialStackFrame(Class, Method, FrameSize, StackFrame, ReturnType), Environment = environment(Class, Method, ReturnType, MergedCode, MaxStack, Handlers), handlersAreLegal(Environment), mergedCodeIsTypeSafe(Environment, MergedCode, StackFrame). 
THE CLASS FILE FORMAT Verification of class Files 4.10 205 Let us consider exception handlers first. An exception handler is represented by a functor application of the form: handler(Start, End, Target, ClassName) whose arguments are, respectively, the start and end of the range of instructions covered by the handler, the first instruction of the handler code, and the name of the exception class that this handler is designed to handle. An exception handler is legal if its start (Start) is less than its end (End), there exists an instruction whose offset is equal to Start, there exists an instruction whose offset equals End, and the handler's exception class is assignable to the class Throwable. The exception class of a handler is Throwable if the handler's class entry is 0, otherwise it is the class named in the handler. An additional requirement exists for a handler inside an <init> method if one of the instructions covered by the handler is invokespecial of an <init> method. In this case, the fact that a handler is running means the object under construction is likely broken, so it is important that the handler does not swallow the exception and allow the enclosing <init> method to return normally to the caller. Accordingly, the handler is required to either complete abruptly by throwing an exception to the caller of the enclosing <init> method, or to loop forever. 4.10 Verification of class Files THE CLASS FILE FORMAT 206 handlersAreLegal(Environment) :exceptionHandlers(Environment, Handlers), checklist(handlerIsLegal(Environment), Handlers). handlerIsLegal(Environment, Handler) :Handler = handler(Start, End, Target, _), Start < End, allInstructions(Environment, Instructions), member(instruction(Start, _), Instructions), offsetStackFrame(Environment, Target, _), instructionsIncludeEnd(Instructions, End), currentClassLoader(Environment, CurrentLoader), handlerExceptionClass(Handler, ExceptionClass, CurrentLoader), isBootstrapLoader(BL), isAssignable(ExceptionClass, class('java/lang/Throwable', BL)), initHandlerIsLegal(Environment, Handler). instructionsIncludeEnd(Instructions, End) :member(instruction(End, _), Instructions). instructionsIncludeEnd(Instructions, End) :member(endOfCode(End), Instructions). handlerExceptionClass(handler(_, _, _, 0), class('java/lang/Throwable', BL), _) :isBootstrapLoader(BL). handlerExceptionClass(handler(_, _, _, Name), class(Name, L), L) :Name \\= 0. THE CLASS FILE FORMAT Verification of class Files 4.10 207 initHandlerIsLegal(Environment, Handler) :notInitHandler(Environment, Handler). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isNotInit(Method). notInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method), member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, MethodName, Descriptor), MethodName \\= '<init>'. initHandlerIsLegal(Environment, Handler) :isInitHandler(Environment, Handler), sublist(isApplicableInstruction(Target), Instructions, HandlerInstructions), noAttemptToReturnNormally(HandlerInstructions). isInitHandler(Environment, Handler) :Environment = environment(_Class, Method, _, Instructions, _, _), isInit(Method). member(instruction(_, invokespecial(CP)), Instructions), CP = method(MethodClassName, '<init>', Descriptor). isApplicableInstruction(HandlerStart, instruction(Offset, _)) :Offset >= HandlerStart. noAttemptToReturnNormally(Instructions) :notMember(instruction(_, return), Instructions). 
noAttemptToReturnNormally(Instructions) :member(instruction(_, athrow), Instructions). Let us now turn to the stream of instructions and stack map frames. Merging instructions and stack map frames into a single stream involves four cases: • Merging an empty StackMap and a list of instructions yields the original list of instructions. mergeStackMapAndCode([], CodeList, CodeList). • Given a list of stack map frames beginning with the type state for the instruction at Offset, and a list of instructions beginning at Offset, the merged list is the head of the stack map frame list, followed by the head of the instruction list, followed by the merge of the tails of the two lists. mergeStackMapAndCode([stackMap(Offset, Map) | RestMap], [instruction(Offset, Parse) | RestCode], [stackMap(Offset, Map), instruction(Offset, Parse) | RestMerge]) :mergeStackMapAndCode(RestMap, RestCode, RestMerge). • Otherwise, given a list of stack map frames beginning with the type state for the instruction at OffsetM, and a list of instructions beginning at OffsetP, then, if OffsetP < OffsetM, the merged list consists of the head of the instruction list, followed by the merge of the stack map frame list and the tail of the instruction list. mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], [instruction(OffsetP, Parse) | RestCode], [instruction(OffsetP, Parse) | RestMerge]) :OffsetP < OffsetM, mergeStackMapAndCode([stackMap(OffsetM, Map) | RestMap], RestCode, RestMerge). • Otherwise, the merge of the two lists is undefined. Since the instruction list has monotonically increasing offsets, the merge of the two lists is not defined unless every stack map frame offset has a corresponding instruction offset and the stack map frames are in monotonically ", "title": "" }, { "docid": "1f121c30e686d25f44363f44dc71b495", "text": "In this paper we show that the Euler number of the compactified Jacobian of a rational curve C with locally planar singularities is equal to the multiplicity of the δ-constant stratum in the base of a semi-universal deformation of C. In particular, the multiplicity assigned by Yau, Zaslow and Beauville to a rational curve on a K3 surface S coincides with the multiplicity of the normalisation map in the moduli space of stable maps to S. Introduction Let C be a reduced and irreducible projective curve with singular set Σ ⊂ C and let n : C̃ −→ C be its normalisation. The generalised Jacobian JC of C is an extension of JC̃ by an affine commutative group of dimension δ := dimH0(n∗(OC̃)/OC) = ∑", "title": "" }, { "docid": "d197eacce97d161e4292ba541f8bed57", "text": "A Luenberger-based observer is proposed for the state estimation of a class of nonlinear systems subject to parameter uncertainty and bounded disturbance signals. A nonlinear observer gain is designed in order to minimize the effects of the uncertainty, error estimation and exogenous signals in an H∞ sense by means of a set of state- and parameter-dependent linear matrix inequalities that are solved using standard software packages. A numerical example illustrates the approach.", "title": "" }, { "docid": "e3299737a0fb3cd3c9433f462565b278", "text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. 
The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. 
There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.", "title": "" }, { "docid": "00413dc27271c927b8fd67bde63f48eb", "text": "The SEAGULL project aims at the development of intelligent systems to support maritime situation awareness based on unmanned aerial vehicles. It proposes to create an intelligent maritime surveillance system by equipping unmanned aerial vehicles (UAVs) with different types of optical sensors. Optical sensors such as cameras (visible, infrared, multi and hyper spectral) can contribute significantly to the generation of situational awareness of maritime events such as (i) detection and georeferencing of oil spills or hazardous and noxious substances; (ii) tracking systems (e.g. vessels, shipwrecked, lifeboat, debris, etc.); (iii) recognizing behavioral patterns (e.g. vessels rendezvous, high-speed vessels, atypical patterns of navigation, etc.); and (iv) monitoring parameters and indicators of good environmental status. On-board transponders will be used for collision detection and avoidance mechanism (sense and avoid). This paper describes the core of the research and development work done during the first 2 years of the project with particular emphasis on the following topics: system architecture, automatic detection of sea vessels by vision sensors and custom designed computer vision algorithms; and a sense and avoid system developed in the theoretical framework of zero-sum differential games.", "title": "" }, { "docid": "a697f85ad09699ddb38994bd69b11103", "text": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. 
This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.", "title": "" }, { "docid": "e79db51ac85ceafba66dddd5c038fbdf", "text": "Machine learning based anti-phishing techniques are based on various features extracted from different sources. These features differentiate a phishing website from a legitimate one. Features are taken from various sources like URL, page content, search engine, digital certificate, website traffic, etc, of a website to detect it as a phishing or non-phishing. The websites are declared as phishing sites if the heuristic design of the websites matches with the predefined rules. The accuracy of the anti-phishing solution depends on features set, training data and machine learning algorithm. This paper presents a comprehensive analysis of Phishing attacks, their exploitation, some of the recent machine learning based approaches for phishing detection and their comparative study. It provides a better understanding of the phishing problem, current solution space in machine learning domain, and scope of future research to deal with Phishing attacks efficiently using machine learning based approaches.", "title": "" }, { "docid": "936128e89e1c0edec5c0489fa41ba4a2", "text": "Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We introduce a novel class of probabilistic models, comprising an undirected discrete component and a directed hierarchical continuous component, that can be trained efficiently using the variational autoencoder framework. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, OMNIGLOT, and Caltech-101 Silhouettes datasets.", "title": "" }, { "docid": "c6954957e6629a32f9845df15c60be85", "text": "Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of π) contain internal evidence of a nontrivial causal history. We formalize this distinction by defining an object’s “logical depth” as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random (i.e. Martin-Löf random). This definition of depth is shown to be reasonably machineindependent, as well as obeying a slow-growth law: deep objects cannot be quickly produced from shallow ones by any deterministic process, nor with much probability by a probabilistic process, but can be produced slowly. Next we apply depth to the physical problem of “self-organization,” inquiring in particular under what conditions (e.g. 
noise, irreversibility, spatial and other symmetries of the initial conditions and equations of motion) statistical-mechanical model systems can imitate computers well enough to undergo unbounded increase of depth in the limit of infinite space and time.", "title": "" } ]
scidocsrr
c7424b68be7680bf6e1aef9ec49a024a
Adjustable Real-time Style Transfer
[ { "docid": "b5c8ea776debc32ea2663090eb6f37df", "text": "Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects potentially at different depths, the resulting images are often unsatisfactory because image layout is destroyed and the boundary between the foreground and background as well as different objects becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach for neural style transfer that integrates depth preservation as additional loss, preserving overall image layout while performing style transfer.", "title": "" }, { "docid": "10ae6cdb445e4faf1e6bed5cad6eb3ba", "text": "It this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will be made available at https://github.com/DmitryUlyanov/texture_nets.", "title": "" }, { "docid": "8a55bf5b614d750a7de6ac34dc321b10", "text": "Unsupervised image-to-image translation aims at learning the relationship between samples from two image domains without supervised pair information. The relationship between two domain images can be one-to-one, one-to-many or many-to-many. In this paper, we study the one-to-many unsupervised image translation problem in which an input sample from one domain can correspond to multiple samples in the other domain. To learn the complex relationship between the two domains, we introduce an additional variable to control the variations in our one-to-many mapping. A generative model with an XO-structure, called the XOGAN, is proposed to learn the cross domain relationship among the two domains and the additional variables. Not only can we learn to translate between the two image domains, we can also handle the translated images with additional variations. Experiments are performed on unpaired image generation tasks, including edges-to-objects translation and facial image translation. We show that the proposed XOGAN model can generate plausible images and control variations, such as color and texture, of the generated images. Moreover, while state-of-the-art unpaired image generation algorithms tend to generate images with monotonous colors, XOGAN can generate more diverse results.", "title": "" }, { "docid": "344be59c5bb605dec77e4d7bd105d899", "text": "Recently, style transfer has received a lot of attention. 
While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images. Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.", "title": "" } ]
[ { "docid": "963f97c27adbc7d1136e713247e9a852", "text": "Scheduling in the context of parallel systems is often thought of in terms of assigning tasks in a program to processors, so as to minimize the makespan. This formulation assumes that the processors are dedicated to the program in question. But when the parallel system is shared by a number of users, this is not necessarily the case. In the context of multiprogrammed parallel machines, scheduling refers to the execution of threads from competing programs. This is an operating system issue, involved with resource allocation, not a program development issue. Scheduling schemes for multiprogrammed parallel systems can be classi ed as one or two leveled. Single-level scheduling combines the allocation of processing power with the decision of which thread will use it. Two level scheduling decouples the two issues: rst, processors are allocated to the job, and then the job's threads are scheduled using this pool of processors. The processors of a parallel system can be shared in two basic ways, which are relevant for both one-level and two-level scheduling. One approach is to use time slicing, e.g. when all the processors in the system (or all the processors in the pool) service a global queue of ready threads. The other approach is to use space slicing, and partition the processors statically or dynamically among the di erent jobs. As these approaches are orthogonal to each other, it is also possible to combine them in various ways; for example, this is often done in gang scheduling. Systems using the various approaches are described, and the implications of the di erent mechanisms are discussed. The goals of this survey are to describe the many di erent approaches within a uni ed framework based on the mechanisms used to achieve multiprogramming, and at the same time document commercial systems that have not been described in the open literature.", "title": "" }, { "docid": "409baee7edaec587727624192eab93aa", "text": "It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. 
Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than at the chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus highly interesting that the FN400 effect was not induced, although color manipulation of recognition memory was behaviorally shown, as seen in previous studies. Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory.", "title": "" }, { "docid": "2742db8262616f2b69d92e0066e6930c", "text": "Most of previous work in knowledge base (KB) completion has focused on the problem of relation extraction. In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB competition yet receives little attention. Due to the novelty of this task, we construct a large-scale dataset and design an automatic evaluation methodology. Our knowledge base completion method uses information within the existing KB and external information from Wikipedia. We show that individual methods trained with a global objective that considers unobserved cells from both the entity and the type side gives consistently higher quality predictions compared to baseline methods. We also perform manual evaluation on a small subset of the data to verify the effectiveness of our knowledge base completion methods and the correctness of our proposed automatic evaluation method.", "title": "" }, { "docid": "a91add591aacaa333e109d77576ba463", "text": "It has become essential to scrutinize and evaluate software development methodologies, mainly because of their increasing number and variety. Evaluation is required to gain a better understanding of the features, strengths, and weaknesses of the methodologies. The results of such evaluations can be leveraged to identify the methodology most appropriate for a specific context. Moreover, methodology improvement and evolution can be accelerated using these results. However, despite extensive research, there is still a need for a feature/criterion set that is general enough to allow methodologies to be evaluated regardless of their types. We propose a general evaluation framework which addresses this requirement. In order to improve the applicability of the proposed framework, all the features – general and specific – are arranged in a hierarchy along with their corresponding criteria. Providing different levels of abstraction enables users to choose the suitable criteria based on the context. Major evaluation frameworks for object-oriented, agent-oriented, and aspect-oriented methodologies have been studied and assessed against the proposed framework to demonstrate its reliability and validity.", "title": "" }, { "docid": "00e60176eca7d86261c614196849a946", "text": "This paper proposes a novel low-profile dual polarized antenna for 2.4 GHz application. The proposed antenna consists of a circular patch with four curved T-stubs and a differential feeding network. Due to the parasitic loading of the curved T-stubs, the bandwidth has been improved. Good impedance matching and dual-polarization with low cross polarization have been achieved within 2.4–2.5 GHz, which is sufficient for WLAN application. 
The total thickness of the antenna is only 0.031λ0, which is low-profile when compared with its counterparts.", "title": "" }, { "docid": "b5dc56272d4dea04b756a8614d6762c9", "text": "Platforms have been considered as a paradigm for managing new product development and innovation. Since their introduction, studies on platforms have introduced multiple conceptualizations, leading to a fragmentation of research and different perspectives. By systematically reviewing the platform literature and combining bibliometric and content analyses, this paper examines the platform concept and its evolution, proposes a thematic classification, and highlights emerging trends in the literature. Based on this hybrid methodological approach (bibliometric and content analyses), the results show that platform research has primarily focused on issues that are mainly related to firms' internal aspects, such as innovation, modularity, commonality, and mass customization. Moreover, scholars have recently started to focus on new research themes, including managerial questions related to capability building, strategy, and ecosystem building based on platforms. As its main contributions, this paper improves the understanding of and clarifies the evolutionary trajectory of the platform concept, and identifies trends and emerging themes to be addressed in future studies.", "title": "" }, { "docid": "35d7da09017c0a6a40bf90bd2e7ea5fc", "text": "Cloud computing promises a radical shift in the provisioning of computing resource within enterprise. This paper: i) describes the challenges that decision-makers face when attempting to determine the feasibility of the adoption of cloud computing in their organisations; ii) illustrates a lack of existing work to address the feasibility challenges of cloud adoption in enterprise; iii) introduces the Cloud Adoption Toolkit that provides a framework to support decision-makers in identifying their concerns, and matching these concerns to appropriate tools/techniques that can be used to address them. The paper adopts a position paper methodology such that case study evidence is provided, where available, to support claims. We conclude that the Cloud Adoption Toolkit, whilst still under development, shows signs that it is a useful tool for decision-makers as it helps address the feasibility challenges of cloud adoption in enterprise.", "title": "" }, { "docid": "8fb1386af94abb9cacda76861680effd", "text": "This paper focuses on the development of a front- and rear-wheel-independent drive-type electric vehicle (EV) (FRID EV) as a next-generation EV. The ideal characteristics of a FRID EV promote good performance and safety and are the result of structural features that independently control the driving and braking torques of the front and rear wheels. The first characteristic is the failsafe function. This function enables vehicles to continue running without any unexpected or sudden stops, even if one of the propulsion systems fails. The second characteristic is a function that performs efficient acceleration and deceleration on all road surfaces. This function works by distributing the driving or braking torques to the front and rear wheels, taking into consideration load movement. The third characteristic ensures that the vehicle runs safely on roads with a low friction coefficient (μ), such as icy roads. 
In this paper, we propose a driving torque distribution method when cornering and a braking torque distribution method; these methods are related to the third characteristic, and they are particularly effective when driving on roads with ultralow μ. We verify the effectiveness of the proposed torque control methods through simulations and experiments on the ultralow-μ road surface with a μ of 0.1.", "title": "" }, { "docid": "5ebd4fc7ee26a8f831f7fea2f657ccdd", "text": "1 This article was reviewed and accepted by all the senior editors, including the editor-in-chief. Articles published in future issues will be accepted by just a single senior editor, based on reviews by members of the Editorial Board. 2 Sincere thanks go to Anna Dekker and Denyse O’Leary for their assistance with this research. Funding was generously provided by the Advanced Practices Council of the Society for Information Management and by the Social Sciences and Humanities Research Council of Canada. An earlier version of this manuscript was presented at the Academy of Management Conference in Toronto, Canada, in August 2000. 3 In this article, the terms information systems (IS) and information technology (IT) are used interchangeably. 4 Regardless of whether IS services are provided internally (in a centralized, decentralized, or federal manner) or are outsourced, we assume the boundaries of the IS function can be identified. Thus, the fit between the unit(s) providing IS services and the rest of the organization can be examined. and books have been written on the subject, firms continue to demonstrate limited alignment.", "title": "" }, { "docid": "06bba1f9f57b7b452af47321ac8fa358", "text": "Little is known about the genetic changes that distinguish domestic cat populations from their wild progenitors. Here we describe a high-quality domestic cat reference genome assembly and comparative inferences made with other cat breeds, wildcats, and other mammals. Based upon these comparisons, we identified positively selected genes enriched for genes involved in lipid metabolism that underpin adaptations to a hypercarnivorous diet. We also found positive selection signals within genes underlying sensory processes, especially those affecting vision and hearing in the carnivore lineage. We observed an evolutionary tradeoff between functional olfactory and vomeronasal receptor gene repertoires in the cat and dog genomes, with an expansion of the feline chemosensory system for detecting pheromones at the expense of odorant detection. Genomic regions harboring signatures of natural selection that distinguish domestic cats from their wild congeners are enriched in neural crest-related genes associated with behavior and reward in mouse models, as predicted by the domestication syndrome hypothesis. Our description of a previously unidentified allele for the gloving pigmentation pattern found in the Birman breed supports the hypothesis that cat breeds experienced strong selection on specific mutations drawn from random bred populations. Collectively, these findings provide insight into how the process of domestication altered the ancestral wildcat genome and build a resource for future disease mapping and phylogenomic studies across all members of the Felidae.", "title": "" }, { "docid": "cc4e8c21e58a8b26bf901b597d0971d8", "text": "Pedestrian detection and semantic segmentation are high potential tasks for many real-time applications. However most of the top performing approaches provide state of art results at high computational costs. 
In this work we propose a fast solution for achieving state of art results for both pedestrian detection and semantic segmentation. As baseline for pedestrian detection we use sliding windows over cost efficient multiresolution filtered LUV+HOG channels. We use the same channels for classifying pixels into eight semantic classes. Using short range and long range multiresolution channel features we achieve more robust segmentation results compared to traditional codebook based approaches at much lower computational costs. The resulting segmentations are used as additional semantic channels in order to achieve a more powerful pedestrian detector. To also achieve fast pedestrian detection we employ a multiscale detection scheme based on a single flexible pedestrian model and a single image scale. The proposed solution provides competitive results on both pedestrian detection and semantic segmentation benchmarks at 8 FPS on CPU and at 15 FPS on GPU, being the fastest top performing approach.", "title": "" }, { "docid": "e462c0cfc1af657cb012850de1b7b717", "text": "ASSOCIATIONS BETWEEN PHYSICAL ACTIVITY, PHYSICAL FITNESS, AND FALLS RISK IN HEALTHY OLDER INDIVIDUALS Christopher Deane Vaughan Old Dominion University, 2016 Chair: Dr. John David Branch Objective: The purpose of this study was to assess relationships between objectively measured physical activity, physical fitness, and the risk of falling. Methods: A total of n=29 subjects completed the study, n=15 male and n=14 female age (mean±SD)= 70± 4 and 71±3 years, respectively. In a single testing session, subjects performed pre-post evaluations of falls risk (Short-from PPA) with a 6-minute walking intervention between the assessments. The falls risk assessment included tests of balance, knee extensor strength, proprioception, reaction time, and visual contrast. The sub-maximal effort 6-minute walking task served as an indirect assessment of cardiorespiratory fitness. Subjects traversed a walking mat to assess for variation in gait parameters during the walking task. Additional center of pressure (COP) balance measures were collected via forceplate during the falls risk assessments. Subjects completed a Modified Falls Efficacy Scale (MFES) falls confidence survey. Subjects’ falls histories were also collected. Subjects wore hip mounted accelerometers for a 7-day period to assess time spent in moderate to vigorous physical activity (MVPA). Results: Males had greater body mass and height than females (p=0.001, p=0.001). Males had a lower falls risk than females at baseline (p=0.043) and post-walk (p=0.031). MFES scores were similar among all subjects (Median = 10). Falls history reporting revealed; fallers (n=8) and non-fallers (n=21). No significant relationships were found between main outcome measures of MVPA, cardiorespiratory fitness, or falls risk. Fallers had higher knee extensor strength than non-fallers at baseline (p=0.028) and post-walk (p=0.011). Though not significant (p=0.306), fallers spent 90 minutes more time in MVPA than non-fallers (427.8±244.6 min versus 335.7±199.5). Variations in gait and COP variables were not significant. Conclusions: This study found no apparent relationship between objectively measured physical activity, indirectly measured cardiorespiratory fitness, and falls risk.", "title": "" }, { "docid": "2fb92e88ecbf2937b3b08a9f8de34618", "text": "The area of image captioning i.e. the automatic generation of short textual descriptions of images has experienced much progress recently. 
However, image captioning approaches often only focus on describing the content of the image without any emotional or sentimental dimension which is common in human captions. This paper presents an approach for image captioning designed specifically to incorporate emotions and feelings into the caption generation process. The presented approach consists of a Deep Convolutional Neural Network (CNN) for detecting Adjective Noun Pairs in the image and a novel graphical network architecture called \"Concept And Syntax Transition (CAST)\" network for generating sentences from these detected concepts.", "title": "" }, { "docid": "9ca12c5f314d077093753dc0f3ff9cd5", "text": "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning — answering image-related questions which require a multi-step, high-level process — a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-theart error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.", "title": "" }, { "docid": "d91e11127e0d665b859420a534288516", "text": "In most cases, the story of popular RPG games is designed by professional designers as a main content. However, manual design of game content has limitation in the quantitative aspect. Manual story generation requires a large amount of time and effort. Because gamers want more diverse and rich content, so it is not easy to satisfy the needs with manual design. PCG (Procedural Content Generation) is to automatically generate the content of the game. In this paper, we propose a quest generation engine using Petri net planning. As a combination of Petri-net modules a quest, a quest plot is created. The proposed method is applied to a commercial game platform to show the feasibility.", "title": "" }, { "docid": "27329c67322a5ed2c4f2a7dd6ceb79a8", "text": "In the world’s largest-ever deployment of online voting, the iVote Internet voting system was trusted for the return of 280,000 ballots in the 2015 state election in New South Wales, Australia. During the election, we performed an independent security analysis of parts of the live iVote system and uncovered severe vulnerabilities that could be leveraged to manipulate votes, violate ballot privacy, and subvert the verification mechanism. These vulnerabilities do not seem to have been detected by the election authorities before we disclosed them, despite a preelection security review and despite the system having run in a live state election for five days. One vulnerability, the result of including analytics software from an insecure external server, exposed some votes to complete compromise of privacy and integrity. At least one parliamentary seat was decided by a margin much smaller than the number of votes taken while the system was vulnerable. We also found fundamental protocol flaws, including vote verification that was itself susceptible to manipulation. 
This incident underscores the difficulty of conducting secure elections online and carries lessons for voters, election officials, and the e-voting research community.", "title": "" }, { "docid": "c175910d1809ad6dc073f79e4ca15c0c", "text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.", "title": "" }, { "docid": "1c177a7fdbd15e04a6b122a284a9014a", "text": "Malicious software installed on infected computers is a fundamental component of online crime. Malware development thus plays an essential role in the underground economy of cyber-crime. Malware authors regularly update their software to defeat defenses or to support new or improved criminal business models. A large body of research has focused on detecting malware, defending against it and identifying its functionality. In addition to these goals, however, the analysis of malware can provide a glimpse into the software development industry that develops malicious code.\n In this work, we present techniques to observe the evolution of a malware family over time. First, we develop techniques to compare versions of malicious code and quantify their differences. Furthermore, we use behavior observed from dynamic analysis to assign semantics to binary code and to identify functional components within a malware binary. By combining these techniques, we are able to monitor the evolution of a malware's functional components. We implement these techniques in a system we call Beagle, and apply it to the observation of 16 malware strains over several months. The results of these experiments provide insight into the effort involved in updating malware code, and show that Beagle can identify changes to individual malware components.", "title": "" }, { "docid": "04a4996eb5be0d321037cac5cb3c1ad6", "text": "Repeated retrieval enhances long-term retention, and spaced repetition also enhances retention. A question with practical and theoretical significance is whether there are particular schedules of spaced retrieval (e.g., gradually expanding the interval between tests) that produce the best learning. In the present experiment, subjects studied and were tested on items until they could recall each one. They then practiced recalling the items on 3 repeated tests that were distributed according to one of several spacing schedules. Increasing the absolute (total) spacing of repeated tests produced large effects on long-term retention: Repeated retrieval with long intervals between each test produced a 200% improvement in long-term retention relative to repeated retrieval with no spacing between tests. However, there was no evidence that a particular relative spacing schedule (expanding, equal, or contracting) was inherently superior to another. 
Although expanding schedules afforded a pattern of increasing retrieval difficulty across repeated tests, this did not translate into gains in long-term retention. Repeated spaced retrieval had powerful effects on retention, but the relative schedule of repeated tests had no discernible impact.", "title": "" } ]
scidocsrr
b9a2345fa6d0740625baf845a07488d4
Diagonal principal component analysis for face recognition
[ { "docid": "94b84ed0bb69b6c4fc7a268176146eea", "text": "We consider the problem of representing image matrices with a set of basis functions. One common solution for that problem is to first transform the 2D image matrices into 1D image vectors and then to represent those 1D image vectors with eigenvectors, as done in classical principal component analysis. In this paper, we adopt a natural representation for the 2D image matrices using eigenimages, which are 2D matrices with the same size of original images and can be directly computed from original 2D image matrices. We discuss how to compute those eigenimages effectively. Experimental result on ORL image database shows the advantages of eigenimages method in representing the 2D images.", "title": "" } ]
[ { "docid": "41eab64d00f1a4aaea5c5899074d91ca", "text": "Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.", "title": "" }, { "docid": "f9fd7fc57dfdfbfa6f21dc074c9e9daf", "text": "Recently, Lin and Tsai proposed an image secret sharing scheme with steganography and authentication to prevent participants from the incidental or intentional provision of a false stego-image (an image containing the hidden secret image). However, dishonest participants can easily manipulate the stego-image for successful authentication but cannot recover the secret image, i.e., compromise the steganography. In this paper, we present a scheme to improve authentication ability that prevents dishonest participants from cheating. The proposed scheme also defines the arrangement of embedded bits to improve the quality of stego-image. Furthermore, by means of the Galois Field GF(2), we improve the scheme to a lossless version without additional pixels. 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "dc8ffc5fd84b3af4cc88d75f7bc88f77", "text": "Digital crimes is big problem due to large numbers of data access and insufficient attack analysis techniques so there is the need for improvements in existing digital forensics techniques. With growing size of storage capacity these digital forensic investigations are getting more difficult. Visualization allows for displaying large amounts of data at once. Integrated visualization of data distribution bars and rules, visualization of behaviour and comprehensive analysis, maps allow user to analyze different rules and data at different level, with any kind of anomaly in data. Data mining techniques helps to improve the process of visualization. These papers give comprehensive review on various visualization techniques with various anomaly detection techniques.", "title": "" }, { "docid": "9813df16b1852cf6d843ff3e1c67fa88", "text": "Traumatic neuromas are tumors resulting from hyperplasia of axons and nerve sheath cells after section or injury to the nervous tissue. We present a case of this tumor, confirmed by anatomopathological examination, in a male patient with history of circumcision. Knowledge of this entity is very important in achieving the differential diagnosis with other lesions that affect the genital area such as condyloma acuminata, bowenoid papulosis, lichen nitidus, sebaceous gland hyperplasia, achrochordon and pearly penile papules.", "title": "" }, { "docid": "534609ce9b008555cf433ba20b02fb4a", "text": "VHPOP is a partial order causal link (POCL) planner loosely based on UCPOP. 
It draws from the experience gained in the early to mid 1990's on flaw selection strategies for POCL planning, and combines this with more recent developments in the field of domain independent planning such as distance based heuristics and reachability analysis. We present an adaptation of the additive heuristic for plan space planning, and modify it to account for possible reuse of existing actions in a plan. We also propose a large set of novel flaw selection strategies, and show how these can help us solve more problems than previously possible by POCL planners. VHPOP also supports planning with durative actions by incorporating standard techniques for temporal constraint reasoning. We demonstrate that the same heuristic techniques used to boost the performance of classical POCL planning can be effective in domains with durative actions as well. The result is a versatile heuristic POCL planner competitive with established CSP-based and heuristic state space planners.", "title": "" }, { "docid": "8c8a100e4dc69e1e68c2bd55f010656d", "text": "In this paper, a data hiding scheme by simple LSB substitution is proposed. By applying an optimal pixel adjustment process to the stego-image obtained by the simple LSB substitution method, the image quality of the stego-image can be greatly improved with low extra computational complexity. The worst case mean-square-error between the stego-image and the cover-image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover-image. The obtained results also show a significant improvement with respect to a previous work. © 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cb2df8e27a3c284028d0fbb86652ae14", "text": "The large bulk of packets/flows in future core networks will require a highly efficient header processing in the switching elements. Simplifying lookup in core network switching elements is capital to transport data at high rates and with low latency. Flexible network hardware combined with agile network control is also an essential property for future software-defined networking. We argue that only further decoupling between the control and data planes will unlock the flexibility and agility in SDN for the design of new network solutions for core networks. This article proposes a new approach named KeyFlow to build a flexible network-fabric-based model. It replaces the table lookup in the forwarding engine by elementary operations relying on a residue number system. This provides us tools to design a stateless core network by still using OpenFlow centralized control. A proof of concept prototype is validated using the Mininet emulation environment and OpenFlow 1.0. The results indicate RTT reduction above 50 percent, especially for networks with densely populated flow tables. KeyFlow achieves above 30 percent reduction in keeping active flow state in the network.", "title": "" }, { "docid": "00a48b2c053c5d634a3480c1543cb3d2", "text": "Interruptions and distractions due to smartphone use in healthcare settings pose potential risks to patient safety. Therefore, it is important to assess smartphone use at work, to encourage nursing students to review their relevant behaviors, and to recognize these potential risks. This study's aim was to develop a scale to measure smartphone addiction and test its validity and reliability.
We investigated nursing students' experiences of distractions caused by smartphones in the clinical setting and their opinions about smartphone use policies. Smartphone addiction and the need for a scale to measure it were identified through a literature review and in-depth interviews with nursing students. This scale showed reliability and validity with exploratory and confirmatory factor analysis. In testing the discriminant and convergent validity of the selected (18) items with four factors, the smartphone addiction model explained approximately 91% (goodness-of-fit index = 0.909) of the variance in the data. Pearson correlation coefficients among addiction level, distractions in the clinical setting, and attitude toward policies on smartphone use were calculated. Addiction level and attitude toward policies of smartphone use were negatively correlated. This study suggests that healthcare organizations in Korea should create practical guidelines and policies for the appropriate use of smartphones in clinical practice.", "title": "" }, { "docid": "42b0c0c340cfb49e1eb7c07e8f251f94", "text": "The fisheries sector in the course of the last three decades have been transformed from a developed country to a developing country dominance. Aquaculture, the farming of waters, though a millennia old tradition during this period has become a significant contributor to food fish production, currently accounting for nearly 50 % of global food fish consumption; in effect transforming our dependence from a hunted to a farmed supply as for all our staple food types. Aquaculture and indeed the fisheries sector as a whole is predominated in the developing countries, and accordingly the development strategies adopted by the sector are influenced by this. Aquaculture also being a newly emerged food production sector has being subjected to an increased level of public scrutiny, and one of the most contentious aspects has been its impacts on biodiversity. In this synthesis an attempt is made to assess the impacts of aquaculture on biodiversity. Instances of major impacts on biodiversity conservation arising from aquaculture, such as land use, effluent discharge, effects on wild populations, alien species among others are highlighted and critically examined. The influence of paradigm changes in development strategies and modern day market forces have begun to impact on aquaculture developments. Consequently, improvements in practices and adoption of more environmentally friendly approaches that have a decreasing negative influence on biodiversity conservation are highlighted. An attempt is also made to demonstrate direct and or indirect benefits of aquaculture, such as through being a substitute to meet human needs for food, particularly over-exploited and vulnerable fish stocks, and for other purposes (e.g. medicinal ingredients), on biodiversity conservation, often a neglected entity.", "title": "" }, { "docid": "611f7b5564c9168f73f778e7466d1709", "text": "A fold-back current-limit circuit, with load-insensitive quiescent current characteristic for CMOS low dropout regulator (LDO), is proposed in this paper. This method has been designed in 0.35 µm CMOS technology and verified by Hspice simulation. The quiescent current of the LDO is 5.7 µA at 100-mA load condition. It is only 2.2% more than it in no-load condition, 5.58 µA. The maximum current limit is set to be 197 mA, and the short-current limit is 77 mA. 
Thus, the power consumption can be saved up to 61% at the short-circuit condition, which also decreases the risk of damaging the power transistor. Moreover, the thermal protection can be simplified and the LDO will be more reliable.", "title": "" }, { "docid": "fccbcdff722a297e5a389674d7557a18", "text": "For the last few decades more than twenty standardized usability questionnaires for evaluating software systems have been proposed. These instruments have been widely used in the assessment of usability of user interfaces. They have their own characteristics, can be generic or address specific kinds of systems and can be composed of one or several items. Some comparison or comparative studies were also conducted to identify the best one in different situations. All these issues should be considered while choosing a questionnaire. In this paper, we present an extensive review of these questionnaires considering their key features, some classifications and main comparison studies already performed. Moreover, we present the result of a detailed analysis of all items being evaluated in each questionnaire to indicate those that can identify users’ perceptions about specific usability problems. This analysis was performed by confronting each questionnaire item (around 475 items) with usability criteria proposed by quality standards (ISO 9241-11 and ISO/WD 9241-112) and classical quality ergonomic criteria.", "title": "" }, { "docid": "766b18cdae33d729d21d6f1b2b038091", "text": "1.1 Terminology Intercultural communication or communication between people of different cultural backgrounds has always been and will probably remain an important precondition of human co-existence on earth. The purpose of this paper is to provide a framework of factors that are important in intercultural communication within a general model of human, primarily linguistic, communication. The term intercultural is chosen over the largely synonymous term cross-cultural because it is linked to language use such as “interdisciplinary”, that is cooperation between people with different scientific backgrounds. Perhaps the term also has somewhat fewer connotations than cross-cultural. It is not cultures that communicate, whatever that might imply, but people (and possibly social institutions) with different cultural backgrounds that do. In general, the term “cross-cultural” is probably best used for comparisons between cultures (“cross-cultural comparison”).", "title": "" }, { "docid": "b8875516c3ccf633eb174c94112f436d", "text": "In an attempt to mimic everyday activities that are performed in 3-dimensional environments, exercise programs have been designed to integrate training of the trunk muscles with training of the extremities. Many believe that the most effective way to recruit the core stabilizing muscles is to execute traditional exercise movements on unstable surfaces. However, physical activity is rarely performed with a stable load on an unstable surface; usually, the surface is stable, and the external resistance is not. The purpose of this study was to evaluate muscle activity of the prime movers and core stabilizers while lifting stable and unstable loads on stable and unstable surfaces during the seated overhead shoulder press exercise. Thirty resistance-trained subjects performed the shoulder press exercise for 3 sets of 3 repetitions under 2 load (barbell and dumbbell) and 2 surface (exercise bench and Swiss ball) conditions at a 10 repetition maximum relative intensity.
Surface electromyography (EMG) measured muscle activity for 8 muscles (anterior deltoid, middle deltoid, trapezius, triceps brachii, rectus abdominis, external obliques, and upper and lower erector spinae). The average root mean square of the EMG signal was calculated for each condition. The results showed that as the instability of the exercise condition increased, the external load decreased. Triceps activation increased with external resistance, where the barbell/bench condition had the greatest EMG activation and the dumbbell/Swiss ball condition had the least. The upper erector spinae had greater muscle activation when performing the barbell presses on the Swiss ball vs. the bench. The findings provide little support for training with a lighter load using unstable loads or unstable surfaces.", "title": "" }, { "docid": "eba9ec47b04e08ff2606efa9ffebb6f8", "text": "OBJECTIVE\nThe incidence of neuroleptic malignant syndrome (NMS) is not known, but the frequency of its occurrence with conventional antipsychotic agents has been reported to vary from 0.02% to 2.44%.\n\n\nDATA SOURCES\nMEDLINE search conducted in January 2003 and review of references within the retrieved articles.\n\n\nDATA SYNTHESIS\nOur MEDLINE research yielded 68 cases (21 females and 47 males) of NMS associated with atypical antipsychotic drugs (clozapine, N = 21; risperidone, N = 23; olanzapine, N = 19; and quetiapine, N = 5). The fact that 21 cases of NMS with clozapine were found indicates that low occurrence of extrapyramidal symptoms (EPS) and low EPS-inducing potential do not prevent the occurrence of NMS and D(2) dopamine receptor blocking potential does not have direct correlation with the occurrence of NMS. One of the cardinal features of NMS is an increasing manifestation of EPS, and the conventional antipsychotic drugs are known to produce EPS in 95% or more of NMS cases. With atypical antipsychotic drugs, the incidence of EPS during NMS is of a similar magnitude.\n\n\nCONCLUSIONS\nFor NMS associated with atypical antipsychotic drugs, the mortality rate was lower than that with conventional antipsychotic drugs. However, the mortality rate may simply be a reflection of physicians' awareness and ensuing early treatment.", "title": "" }, { "docid": "62a0b14c86df32d889d43eb484eadcda", "text": "Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most of existing CSP-based methods exploit covariance matrices on a subject-by-subject basis so that inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of subjects in consideration. We develop two methods to determine a composite covariance matrix that is a weighted sum of covariance matrices involving subjects, leading to composite CSP. Numerical experiments on dataset IVa in BCI competition III confirm that our composite CSP methods improve classification performance over the standard CSP (on a subject-by-subject basis), especially in the case of subjects with fewer number of training samples.", "title": "" }, { "docid": "2eafdf2c8f1324090cee1a141a2488e7", "text": "Understanding recurrent networks through rule extraction has a long history. This has taken on new interests due to the need for interpreting or verifying neural networks. One basic form for representing stateful rules is deterministic finite automata (DFA). 
Previous research shows that extracting DFAs from trained second-order recurrent networks is not only possible but also relatively stable. Recently, several new types of recurrent networks with more complicated architectures have been introduced. These handle challenging learning tasks usually involving sequential data. However, it remains an open problem whether DFAs can be adequately extracted from these models. Specifically, it is not clear how DFA extraction will be affected when applied to different recurrent networks trained on data sets with different levels of complexity. Here, we investigate DFA extraction on several widely adopted recurrent networks that are trained to learn a set of seven regular Tomita grammars. We first formally analyze the complexity of Tomita grammars and categorize these grammars according to that complexity. Then we empirically evaluate different recurrent networks for their performance of DFA extraction on all Tomita grammars. Our experiments show that for most recurrent networks, their extraction performance decreases as the complexity of the underlying grammar increases. On grammars of lower complexity, most recurrent networks obtain desirable extraction performance. As for grammars with the highest level of complexity, several complicated models fail, with only certain recurrent networks achieving satisfactory extraction performance.", "title": "" }, { "docid": "48dbd48a531867486b2d018442f64ebb", "text": "The purpose of this paper is to analyze the extent to which the use of social media can support customer knowledge management (CKM) in organizations relying on a traditional bricks-and-mortar business model. The paper uses a combination of qualitative case study and netnography on Starbucks, an international coffee house chain. Data retrieved from varied sources such as newspapers, newswires, magazines, scholarly publications, books, and social media services were textually analyzed. Three major findings could be culled from the paper. First, Starbucks deploys a wide range of social media tools for CKM that serve as effective branding and marketing instruments for the organization. Second, Starbucks redefines the roles of its customers through the use of social media by transforming them from passive recipients of beverages to active contributors of innovation. Third, Starbucks uses effective strategies to alleviate customers’ reluctance for voluntary knowledge sharing, thereby promoting engagement in social media. The scope of the paper is limited by the window of the data collection period. Hence, the findings should be interpreted in the light of this constraint. The lessons gleaned from the case study suggest that social media is not a tool exclusive to online businesses. It can be a potential game-changer in supporting CKM efforts even for traditional businesses. This paper represents one of the earliest works that analyzes the use of social media for CKM in an organization that relies on a traditional bricks-and-mortar business model.", "title": "" }, { "docid": "cb9ba3aaafccae2cd7ea5e32479d2099", "text": "Partial least squares-based structural equation modeling (PLS-SEM) is extensively used in the field of information systems, as well as in many other fields where multivariate statistical methods are employed. One of the most fundamental issues in PLS-SEM is that of minimum sample size estimation. The “10-times rule” has been a favorite due to its simplicity of application, even though it tends to yield imprecise estimates.
We propose two related methods, based on mathematical equations, as alternatives for minimum sample size estimation in PLS-SEM: the inverse square root method, and the gamma-exponential method. Based on three Monte Carlo experiments, we demonstrate that both methods are fairly accurate. The inverse square root method is particularly attractive in terms of its simplicity of application.", "title": "" }, { "docid": "9327ab4f9eba9a32211ddb39463271b1", "text": "We investigate techniques for visualizing time series data and evaluate their effect in value comparison tasks. We compare line charts with horizon graphs - a space-efficient time series visualization technique - across a range of chart sizes, measuring the speed and accuracy of subjects' estimates of value differences between charts. We identify transition points at which reducing the chart height results in significantly differing drops in estimation accuracy across the compared chart types, and we find optimal positions in the speed-accuracy tradeoff curve at which viewers performed quickly without attendant drops in accuracy. Based on these results, we propose approaches for increasing data density that optimize graphical perception.", "title": "" }, { "docid": "94fbd5c6f1347bb04ab8d9f6e768f8df", "text": "(3) because ‖(xa, va)‖2 ≤ L and ηt only has a finite variance. For the first term on the right-hand side in Eq. (2), if the regularization parameter λ1 is sufficiently large, the Hessian matrix of the loss function specified in the paper is positive definite at the optimizer, based on the property of alternating least squares (Uschmajew 2012). The estimate of Θ and va is thus locally q-linearly convergent to the optimizer. This indicates that for every ε1 > 0 we have ‖v̂a,t+1 − v∗a‖2 ≤ (q1 + ε1)‖v̂a,t − v∗a‖2 (4), where 0 < q1 < 1. As a conclusion, we have for any δ > 0, with probability at least 1 − δ,", "title": "" } ]
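As a concrete illustration of the inverse square root method for minimum sample size mentioned in the PLS-SEM passage above, here is a small Python sketch (my own, not taken from the cited work); the constant 2.486 is the value commonly quoted for a 5% significance level and 80% power, and should be treated as an assumption to verify against the original source.

```python
import math

def min_sample_size_inverse_sqrt(p_min, z_combined=2.486):
    """Inverse square root heuristic for PLS-SEM minimum sample size.

    p_min      : magnitude of the smallest path coefficient expected to be significant
    z_combined : combined z-value; 2.486 is the figure usually cited for
                 alpha = 0.05 and power = 0.80 (assumption -- verify against the source)
    """
    return math.ceil((z_combined / abs(p_min)) ** 2)

# Example: to detect a minimum path coefficient of 0.2,
# min_sample_size_inverse_sqrt(0.2) returns 155.
```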
scidocsrr
4f7def054e9928937bb4e2a827dc1821
Rendering Subdivision Surfaces using Hardware Tessellation
[ { "docid": "5d9ed198f35312988a4b823c79ebb3a4", "text": "A quadtree algorithm is developed to triangulate deformed, intersecting parametric surfaces. The biggest problem with adaptive sampling is to guarantee that the triangulation is accurate within a given tolerance. A new method guarantees the accuracy of the triangulation, given a \"Lipschitz\" condition on the surface definition. The method constructs a hierarchical set of bounding volumes for the surface, useful for ray tracing and solid modeling operations. The task of adaptively sampling a surface is broken into two parts: a subdivision mechanism for recursively subdividing a surface, and a set of subdivision criteria for controlling the subdivision process.An adaptive sampling technique is said to be robust if it accurately represents the surface being sampled. A new type of quadtree, called a restricted quadtree, is more robust than the traditional unrestricted quadtree at adaptive sampling of parametric surfaces. Each sub-region in the quadtree is half the width of the previous region. The restricted quadtree requires that adjacent regions be the same width within a factor of two, while the traditional quadtree makes no restriction on neighbor width. Restricted surface quadtrees are effective at recursively sampling a parametric surface. Quadtree samples are concentrated in regions of high curvature, and along intersection boundaries, using several subdivision criteria. Silhouette subdivision improves the accuracy of the silhouette boundary when a viewing transformation is available at sampling time. The adaptive sampling method is more robust than uniform sampling, and can be more efficient at rendering deformed, intersecting parametric surfaces.", "title": "" }, { "docid": "9c2e89bad3ca7b7416042f95bf4f4396", "text": "We present a simple and computationally efficient algorithm for approximating Catmull-Clark subdivision surfaces using a minimal set of bicubic patches. For each quadrilateral face of the control mesh, we construct a geometry patch and a pair of tangent patches. The geometry patches approximate the shape and silhouette of the Catmull-Clark surface and are smooth everywhere except along patch edges containing an extraordinary vertex where the patches are C0. To make the patch surface appear smooth, we provide a pair of tangent patches that approximate the tangent fields of the Catmull-Clark surface. These tangent patches are used to construct a continuous normal field (through their cross-product) for shading and displacement mapping. Using this bifurcated representation, we are able to define an accurate proxy for Catmull-Clark surfaces that is efficient to evaluate on next-generation GPU architectures that expose a programmable tessellation unit.", "title": "" } ]
[ { "docid": "90fa2211106f4a8e23c5a9c782f1790e", "text": "Page layout is dominant in many genres of physical documents, but it is frequently overlooked when texts are digitised. Its presence is largely determined by available technologies and skills: If no provision is made for creating, preserving, or describing layout, then it tends not to be created, preserved or described. However, I argue, the significance and utility of layout for readers is such that it will survive or re-emerge. I review how layout has been treated in the literature of graphic design and linguistics, and consider its role as a memory tool. I distinguish between fixed, flowed, fugitive and fragmented pages, determined not only by authorial intent but also by technical constraints. Finally, I describe graphic literacy as a component of functional literacy and suggest that corresponding graphic literacies are needed not only by readers, but by creators of documents and by the information management technologies that produce, deliver, and store them.", "title": "" }, { "docid": "327a681898f6f39ae98321643e06fba1", "text": "Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data. We show how to use AT for the tasks of entity recognition and relation extraction. In particular, we demonstrate that applying AT to a general purpose baseline model for jointly extracting entities and relations, allows improving the stateof-the-art effectiveness on several datasets in different contexts (i.e., news, biomedical, and real estate data) and for different languages (English and Dutch).", "title": "" }, { "docid": "297d95a81658b3d50bf3aff5bcbf7047", "text": "In this paper, we introduce a new large-scale face dataset named VGGFace2. The dataset contains 3.31 million images of 9131 subjects, with an average of 362.6 images for each subject. Images are downloaded from Google Image Search and have large variations in pose, age, illumination, ethnicity and profession (e.g. actors, athletes, politicians). The dataset was collected with three goals in mind: (i) to have both a large number of identities and also a large number of images for each identity; (ii) to cover a large range of pose, age and ethnicity; and (iii) to minimise the label noise. We describe how the dataset was collected, in particular the automated and manual filtering stages to ensure a high accuracy for the images of each identity. To assess face recognition performance using the new dataset, we train ResNet-50 (with and without Squeeze-and-Excitation blocks) Convolutional Neural Networks on VGGFace2, on MS-Celeb-1M, and on their union, and show that training on VGGFace2 leads to improved recognition performance over pose and age. Finally, using the models trained on these datasets, we demonstrate state-of-the-art performance on the IJB-A and IJB-B face recognition benchmarks, exceeding the previous state-of-the-art by a large margin. The dataset and models are publicly available.", "title": "" }, { "docid": "2b34bd00f114ddd7758bf4878edcab45", "text": "This paper considers an UWB balun optimized for a frequency band from 6 to 8.5 GHz. The balun provides a transition from unbalanced coplanar waveguide (CPW) to balanced coplanar stripline (CPS), which is suitable for feeding broadband coplanar antennas such as Vivaldi or bow-tie antennas. 
It is shown that applying a solid ground plane under the CPW-to-CPS transition enables decreasing its area by a factor of 4.7. Such a compact balun can be used for feeding uniplanar antennas, while significantly saving substrate area. Several transition configurations have been fabricated for single and double-layer configurations. They have been verified by comparison with results both from a full-wave electromagnetic (EM) simulation and experimental measurements.", "title": "" }, { "docid": "17ebf9f15291a3810d57771a8c669227", "text": "We describe preliminary work toward applying a goal reasoning agent for controlling an underwater vehicle in a partially observable, dynamic environment. In preparation for upcoming at-sea tests, our investigation focuses on a notional scenario wherein an autonomous underwater vehicle pursuing a survey goal unexpectedly detects the presence of a potentially hostile surface vessel. Simulations suggest that Goal Driven Autonomy can successfully reason about this scenario using only the limited computational resources typically available on underwater robotic platforms.", "title": "" }, { "docid": "e377063b8fe2d8a12b7c894e11a530e3", "text": "This paper aims at learning to score the figure skating sports videos. To address this task, we propose a deep architecture that includes two complementary components, i.e., Self-Attentive LSTM and Multi-scale Convolutional Skip LSTM. These two components can efficiently learn the local and global sequential information in each video. Furthermore, we present a large-scale figure skating sports video dataset – FisV dataset. This dataset includes 500 figure skating videos with the average length of 2 minutes and 50 seconds. Each video is annotated by two scores of nine different referees, i.e., Total Element Score(TES) and Total Program Component Score (PCS). Our proposed model is validated on FisV and MIT-skate datasets. The experimental results show the effectiveness of our models in learning to score the figure skating videos.", "title": "" }, { "docid": "550070e6bc24986fbc30c58e2171c227", "text": "Detection of anomalous trajectories is an important problem in the surveillance domain. Various algorithms based on learning of normal trajectory patterns have been proposed for this problem. Yet, these algorithms typically suffer from one or more limitations: They are not designed for sequential analysis of incomplete trajectories or online learning based on an incrementally updated training set. Moreover, they typically involve tuning of many parameters, including ad-hoc anomaly thresholds, and may therefore suffer from overfitting and poorly-calibrated alarm rates. In this article, we propose and investigate the Sequential Hausdorff Nearest-Neighbour Conformal Anomaly Detector (SHNN-CAD) for online learning and sequential anomaly detection in trajectories. This is a parameter-light algorithm that offers a well-founded approach to the calibration of the anomaly threshold. The discords algorithm, originally proposed by Keogh et al, is another parameter-light anomaly detection algorithm that has previously been shown to have good classification performance on a wide range of time-series datasets, including trajectory data. We implement and investigate the performance of SHNN-CAD and the discords algorithm on four different labelled trajectory datasets.
The results show that SHNN-CAD achieves competitive classification performance with minimum parameter tuning during unsupervised online learning and sequential anomaly detection in trajectories.", "title": "" }, { "docid": "8085ffe018b09505464547242b2e3c21", "text": "Reducible flow graphs occur naturally in connection with flowcharts of computer programs and are used extensively for code optimization and global data flow analysis. In this paper we present an O(n2 log(n2/m)) algorithm for finding a maximum cycle packing in any weighted reducible flow graph with n vertices and m arcs; our algorithm heavily relies on Ramachandran's earlier work concerning reducible flow graphs.", "title": "" }, { "docid": "9593712906aa8272716a7fe5b482b91d", "text": "User stories are a widely used notation for formulating requirements in agile development projects. Despite their popularity in industry, little to no academic work is available on assessing their quality. The few existing approaches are too generic or employ highly qualitative metrics. We propose the Quality User Story Framework, consisting of 14 quality criteria that user story writers should strive to conform to. Additionally, we introduce the conceptual model of a user story, which we rely on to design the AQUSA software tool. AQUSA aids requirements engineers in turning raw user stories into higher-quality ones by exposing defects and deviations from good practice in user stories. We evaluate our work by applying the framework and a prototype implementation to three user story sets from industry.", "title": "" }, { "docid": "4805f0548cb458b7fad623c07ab7176d", "text": "This paper presents a unified control framework for controlling a quadrotor tail-sitter UAV. The most salient feature of this framework is its capability of uniformly treating the hovering and forward flight, and enabling continuous transition between these two modes, depending on the commanded velocity. The key part of this framework is a nonlinear solver that solves for the proper attitude and thrust that produces the required acceleration set by the position controller in an online fashion. The planned attitude and thrust are then achieved by an inner attitude controller that is global asymptotically stable. To characterize the aircraft aerodynamics, a full envelope wind tunnel test is performed on the full-scale quadrotor tail-sitter UAV. In addition to planning the attitude and thrust required by the position controller, this framework can also be used to analyze the UAV's equilibrium state (trimmed condition), especially when wind gust is present. Finally, simulation results are presented to verify the controller's capacity, and experiments are conducted to show the attitude controller's performance.", "title": "" }, { "docid": "3eb0ed6db613c94af266279bc38c1c28", "text": "We can better understand deep neural networks by identifying which features each of their neurons have learned to detect. To do so, researchers have created Deep Visualization techniques including activation maximization, which synthetically generates inputs (e.g. images) that maximally activate each neuron. A limitation of current techniques is that they assume each neuron detects only one type of feature, but we know that neurons can be multifaceted, in that they fire in response to many different types of features: for example, a grocery store class neuron must activate either for rows of produce or for a storefront. 
Previous activation maximization techniques constructed images without regard for the multiple different facets of a neuron, creating inappropriate mixes of colors, parts of objects, scales, orientations, etc. Here we introduce an algorithm that explicitly uncovers the multiple facets of each neuron by producing a synthetic visualization of each of the types of images that activate a neuron. We also introduce regularization methods that produce state-of-the-art results in terms of the interpretability of images obtained by activation maximization. By separately synthesizing each type of image a neuron fires in response to, the visualizations have more appropriate colors and coherent global structure. Multifaceted feature visualization thus provides a clearer and more comprehensive description of the role of each neuron. Figure 1. Top: Visualizations of 8 types of images (feature facets) that activate the same “grocery store” class neuron. Bottom: Example training set images that activate the same neuron, and resemble the corresponding synthetic image in the top panel.", "title": "" }, { "docid": "23a329c63f9a778e3ec38c25fa59748a", "text": "Expedia users who prefer the same types of hotels presumably share other commonalities (i.e., non-hotel commonalities) with each other. With this in mind, Kaggle challenged developers to recommend hotels to Expedia users. Armed with a training set containing data about 37 million Expedia users, we set out to do just that. Our machine-learning algorithms ranged from direct applications of material learned in class to multi-part algorithms with novel combinations of recommender system techniques. Kaggle’s benchmark for randomly guessing a user’s hotel cluster is 0.02260, and the mean average precision K = 5 value for naïve recommender systems is 0.05949. Our best combination of machine-learning algorithms achieved a figure just over 0.30. Our results provide insight into performing multi-class classification on data sets that lack linear structure.", "title": "" }, { "docid": "77d2255e0a2d77ea8b2682937b73cc7d", "text": "Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest to a user items that might be of interest to her. Recent studies demonstrate that information from social networks can be exploited to improve accuracy of recommendations. In this paper, we present a survey of collaborative filtering (CF) based social recommender systems. We provide a brief overview over the task of recommender systems and traditional approaches that do not use social network information. We then present how social network information can be adopted by recommender systems as additional input for improved accuracy. We classify CF-based social recommender systems into two categories: matrix factorization based social recommendation approaches and neighborhood based social recommendation approaches. For each category, we survey and compare several represen-", "title": "" }, { "docid": "4e9005d6f8e1ddcd8d160c66cc61ab41", "text": "Architectural tactics are decisions to efficiently solve quality attributes in software architecture. Security is a complex quality property due to its strong dependence on the application domain.
However, the selection of security tactics in the definition of software architecture is guided informally and depends on the experience of the architect. This study presents a methodological approach to address and specify the quality attribute of security in architecture design applying security tactics. The approach is illustrated with a case study about a Tsunami Early Warning System.", "title": "" }, { "docid": "1f613fc1a2e7b29473cf0d3aa53cbb80", "text": "The visualization and analysis of dynamic social networks are challenging problems, demanding the simultaneous consideration of relational and temporal aspects. In order to follow the evolution of a network over time, we need to detect not only which nodes and which links change and when these changes occur, but also the impact they have on their neighbourhood and on the overall relational structure. Aiming to enhance the perception of structural changes at both the micro and the macro level, we introduce the change centrality metric. This novel metric, as well as a set of further metrics we derive from it, enable the pair wise comparison of subsequent states of an evolving network in a discrete-time domain. Demonstrating their exploitation to enrich visualizations, we show how these change metrics support the visual analysis of network dynamics.", "title": "" }, { "docid": "e0f88ddc85cfe4cdcbe761b85d2781d8", "text": "Intermodal Transportation Systems (ITS) are logistics networks integrating different transportation services, designed to move goods from origin to destination in a timely manner and using intermodal transportation means. This paper addresses the problem of the modeling and management of ITS at the operational level considering the impact that the new Information and Communication Technologies (ICT) tools can have on management and control of these systems. An effective ITS model at the operational level should focus on evaluating performance indices describing activities, resources and concurrency, by integrating information and financial flows. To this aim, ITS are regarded as discrete event systems and are modeled in a Petri net framework. We consider as a case study the ferry terminal of Trieste (Italy) that is described and simulated in different operative conditions characterized by different types of ICT solutions and information. The simulation results show that ICT have a huge potential for efficient real time management and operation of ITS, as well as an effective impact on the infrastructures.", "title": "" }, { "docid": "63b283d40abcccd17b4771535ac000e4", "text": "Developing agents to engage in complex goaloriented dialogues is challenging partly because the main learning signals are very sparse in long conversations. In this paper, we propose a divide-and-conquer approach that discovers and exploits the hidden structure of the task to enable efficient policy learning. First, given successful example dialogues, we propose the Subgoal Discovery Network (SDN) to divide a complex goal-oriented task into a set of simpler subgoals in an unsupervised fashion. We then use these subgoals to learn a multi-level policy by hierarchical reinforcement learning. We demonstrate our method by building a dialogue agent for the composite task of travel planning. Experiments with simulated and real users show that our approach performs competitively against a state-of-theart method that requires human-defined subgoals. 
Moreover, we show that the learned subgoals are often human comprehensible.", "title": "" }, { "docid": "83926511ab8ce222f02e96820c8feb68", "text": "The grounding system design for GIS indoor substation is proposed in this paper. The design concept of equipotential ground grids in substation building as well as connection of GIS enclosures to main ground grid is described. The main ground grid design is performed according to IEEE Std. 80-2000. The real case study of grounding system design for 120 MVA, 69-24 kV distribution substation in MEA's power system is demonstrated.", "title": "" }, { "docid": "d18faf207a0dbccc030e5dcc202949ab", "text": "This manuscript conducts a comparison on modern object detection systems in their ability to detect multiple maritime vessel classes. Three highly scoring algorithms from the Pascal VOC Challenge, Histogram of Oriented Gradients by Dalal and Triggs, Exemplar-SVM by Malisiewicz, and Latent-SVM with Deformable Part Models by Felzenszwalb, were compared to determine performance of recognition within a specific category rather than the general classes from the original challenge. In all cases, the histogram of oriented edges was used as the feature set and support vector machines were used for classification. A summary and comparison of the learning algorithms is presented and a new image corpus of maritime vessels was collected. Precision-recall results show improved recognition performance is achieved when accounting for vessel pose. In particular, the deformable part model has the best performance when considering the various components of a maritime vessel.", "title": "" }, { "docid": "b2cb59b7464c3d7ead4fe3d70410a49c", "text": "X-ray measurements of the hip joints of children, with special reference to the acetabular index, suggest that the upper standard deviation of normal comprises the borderline to a critical zone where extreme values of normal and pathologic hips were found together. Above the double standard deviation only severe dysplasias were present. Investigations of the shaft-neck angle and the degree of anteversion including the wide standard deviation demonstrate that it is very difficult to determine where these angles become pathologic. It is more important to look for the relationship between femoral head and acetabulum. A new measurement--the Hip Value is based on measurements of the Idelberg- Frank angle, the Wiberg angle and MZ-distance of decentralization. By statistical methods, normal and pathological joints can be separated as follows: in adult Hip Values, between 6 and 15 indicate a normal joint form; values between 16 and 21 indicate a slight deformation and values of 22 and above are indications of a severe deformation, in children in the normal range the Hip Value reaches 14; values of 15 and up are pathological.", "title": "" } ]
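The SHNN-CAD entry a few passages above combines a Hausdorff nearest-neighbour nonconformity measure with a conformal p-value test; the following rough sketch shows that general recipe (my own simplification, not the published implementation), assuming each trajectory is an (N, 2) array of points.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two trajectories (point arrays)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def nonconformity(traj, others):
    """Nearest-neighbour nonconformity: distance to the closest other trajectory."""
    return min(hausdorff(traj, o) for o in others)

def conformal_anomaly(test_traj, training, epsilon=0.05):
    """Flag test_traj as anomalous when its conformal p-value falls below epsilon."""
    baseline = [nonconformity(t, training[:i] + training[i + 1:])
                for i, t in enumerate(training)]
    alpha = nonconformity(test_traj, training)
    # p-value: share of scores at least as nonconforming as the test score
    p = (sum(b >= alpha for b in baseline) + 1) / (len(baseline) + 1)
    return p < epsilon, p

# Usage sketch:
# training = [np.random.rand(50, 2) for _ in range(20)]   # normal trajectories
# is_anomalous, p = conformal_anomaly(np.random.rand(50, 2) + 2.0, training)
```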
scidocsrr
1defe92f13d92c65f2dce69e045109d4
Classification-Driven Watershed Segmentation
[ { "docid": "5f31e3405af91cd013c3193c7d3cdd8d", "text": "In this paper, we review most major filtering approaches to texture feature extraction and perform a comparative study. Filtering approaches included are Laws masks, ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, wavelet packets and wavelet frames, quadrature mirror filters, discrete cosine transform, eigenfilters, optimized Gabor filters, linear predictors, and optimized finite impulse response filters. The features are computed as the local energy of the filter responses. The effect of the filtering is highlighted, keeping the local energy function and the classification algorithm identical for most approaches. For reference, comparisons with two classical nonfiltering approaches, co-occurrence (statistical) and autoregressive (model based) features, are given. We present a ranking of the tested approaches based on extensive experiments.", "title": "" }, { "docid": "6206968905f6e211b07e896f49ecdc57", "text": "We present here a new algorithm for segmentation of intensity images which is robust, rapid, and free of tuning parameters. The method, however, requires the input of a number of seeds, either individual pixels or regions, which will control the formation of regions into which the image will be segmented. In this correspondence, we present the algorithm, discuss briefly its properties, and suggest two ways in which it can be employed, namely, by using manual seed selection or by automated procedures.", "title": "" } ]
[ { "docid": "dc62e382c60237ae71ebeab6d9be93ea", "text": "Deep reinforcement learning for multi-agent cooperation and competition has been a hot topic recently. This paper focuses on cooperative multi-agent problem based on actor-critic methods under local observations settings. Multi agent deep deterministic policy gradient obtained state of art results for some multi-agent games, whereas, it cannot scale well with growing amount of agents. In order to boost scalability, we propose a parameter sharing deterministic policy gradient method with three variants based on neural networks, including actor-critic sharing, actor sharing and actor sharing with partially shared critic. Benchmarks from rllab show that the proposed method has advantages in learning speed and memory efficiency, well scales with growing amount of agents, and moreover, it can make full use of reward sharing and exchangeability if possible.", "title": "" }, { "docid": "03371f6200ebf2bdf0807e41a998550c", "text": "As next-generation sequencing projects generate massive genome-wide sequence variation data, bioinformatics tools are being developed to provide computational predictions on the functional effects of sequence variations and narrow down the search of casual variants for disease phenotypes. Different classes of sequence variations at the nucleotide level are involved in human diseases, including substitutions, insertions, deletions, frameshifts, and non-sense mutations. Frameshifts and non-sense mutations are likely to cause a negative effect on protein function. Existing prediction tools primarily focus on studying the deleterious effects of single amino acid substitutions through examining amino acid conservation at the position of interest among related sequences, an approach that is not directly applicable to insertions or deletions. Here, we introduce a versatile alignment-based score as a new metric to predict the damaging effects of variations not limited to single amino acid substitutions but also in-frame insertions, deletions, and multiple amino acid substitutions. This alignment-based score measures the change in sequence similarity of a query sequence to a protein sequence homolog before and after the introduction of an amino acid variation to the query sequence. Our results showed that the scoring scheme performs well in separating disease-associated variants (n = 21,662) from common polymorphisms (n = 37,022) for UniProt human protein variations, and also in separating deleterious variants (n = 15,179) from neutral variants (n = 17,891) for UniProt non-human protein variations. In our approach, the area under the receiver operating characteristic curve (AUC) for the human and non-human protein variation datasets is ∼0.85. We also observed that the alignment-based score correlates with the deleteriousness of a sequence variation. In summary, we have developed a new algorithm, PROVEAN (Protein Variation Effect Analyzer), which provides a generalized approach to predict the functional effects of protein sequence variations including single or multiple amino acid substitutions, and in-frame insertions and deletions. The PROVEAN tool is available online at http://provean.jcvi.org.", "title": "" }, { "docid": "9d700ef057eb090336d761ebe7f6acb0", "text": "This article presents initial results on a supervised machine learning approach to determine the semantics of noun compounds in Dutch and Afrikaans. 
After a discussion of previous research on the topic, we present our annotation methods used to provide a training set of compounds with the appropriate semantic class. The support vector machine method used for this classification experiment utilizes a distributional lexical semantics representation of the compound’s constituents to make its classification decision. The collection of words that occur in the near context of the constituent is considered an implicit representation of the semantics of this constituent. F-scores of 47.8% for Dutch and 51.1% for Afrikaans were reached. Keywords—compound semantics; Afrikaans; Dutch; machine learning; distributional methods", "title": "" }, { "docid": "b4edd546c786bbc7a72af67439dfcad7", "text": "We aim to develop a computationally feasible, cognitively-inspired, formal model of concept invention, drawing on Fauconnier and Turner’s theory of conceptual blending, and grounding it on a sound mathematical theory of concepts. Conceptual blending, although successfully applied to describing combinational creativity in a varied number of fields, has barely been used at all for implementing creative computational systems, mainly due to the lack of sufficiently precise mathematical characterisations thereof. The model we will define will be based on Goguen’s proposal of a Unified Concept Theory, and will draw from interdisciplinary research results from cognitive science, artificial intelligence, formal methods and computational creativity. To validate our model, we will implement a proof of concept of an autonomous computational creative system that will be evaluated in two testbed scenarios: mathematical reasoning and melodic harmonisation. We envisage that the results of this project will be significant for gaining a deeper scientific understanding of creativity, for fostering the synergy between understanding and enhancing human creativity, and for developing new technologies for autonomous creative systems.", "title": "" }, { "docid": "b893e0321a51a2b06e1d8f2a59a296b6", "text": "Green tea (GT) and green tea extracts (GTE) have been postulated to decrease cancer incidence. In vitro results indicate a possible effect; however, epidemiological data do not support cancer chemoprevention. We have performed a PubMed literature search for green tea consumption and the correlation to the common tumor types lung, colorectal, breast, prostate, esophageal and gastric cancer, with cohorts from both Western and Asian countries. We additionally included selected mechanistic studies for a possible mode of action. The comparability between studies was limited due to major differences in study outlines; a meta-analysis was thus not possible and studies were evaluated individually. Only for breast cancer could a possible small protective effect be seen in Asian and Western cohorts, whereas for esophagus and stomach cancer, green tea increased the cancer incidence, possibly due to heat stress. No effect was found for colonic/colorectal and prostatic cancer in any country; for lung cancer, Chinese studies found a protective effect, but not studies from outside China. Epidemiological studies thus do not support a cancer protective effect.
GT as an indicator of as yet undefined parameters in lifestyle, environment and/or ethnicity may explain some of the observed differences between China and other countries.", "title": "" }, { "docid": "d880349c2760a8cd71d86ea3212ba1f0", "text": "As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.", "title": "" }, { "docid": "b46801d2903131bcfbc12bdd457ddbe7", "text": "Indicators of Compromise (IOCs) are artifacts observed on a network or in an operating system that can be utilized to indicate a computer intrusion and detect cyber-attacks in an early stage. Thus, they exert an important role in the field of cybersecurity. However, state-of-the-art IOCs detection systems rely heavily on hand-crafted features with expert knowledge of cybersecurity, and require a large amount of supervised training corpora to train an IOC classifier. In this paper, we propose using a neural-based sequence labelling model to identify IOCs automatically from reports on cybersecurity without expert knowledge of cybersecurity. Our work is the first to apply an end-to-end sequence labelling to the task in IOCs identification. By using an attention mechanism and several token spelling features, we find that the proposed model is capable of identifying the low frequency IOCs from long sentences contained in cybersecurity reports. Experiments show that the proposed model outperforms other sequence labelling models, achieving over 88% average F1-score.", "title": "" }, { "docid": "cf6f0a6d53c3b615f27a20907e6eb93f", "text": "OBJECTIVE\nWe sought to investigate whether a low-fat vegan diet improves glycemic control and cardiovascular risk factors in individuals with type 2 diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nIndividuals with type 2 diabetes (n = 99) were randomly assigned to a low-fat vegan diet (n = 49) or a diet following the American Diabetes Association (ADA) guidelines (n = 50). Participants were evaluated at baseline and 22 weeks.\n\n\nRESULTS\nForty-three percent (21 of 49) of the vegan group and 26% (13 of 50) of the ADA group participants reduced diabetes medications. Including all participants, HbA(1c) (A1C) decreased 0.96 percentage points in the vegan group and 0.56 points in the ADA group (P = 0.089). Excluding those who changed medications, A1C fell 1.23 points in the vegan group compared with 0.38 points in the ADA group (P = 0.01). 
Body weight decreased 6.5 kg in the vegan group and 3.1 kg in the ADA group (P < 0.001). Body weight change correlated with A1C change (r = 0.51, n = 57, P < 0.0001). Among those who did not change lipid-lowering medications, LDL cholesterol fell 21.2% in the vegan group and 10.7% in the ADA group (P = 0.02). After adjustment for baseline values, urinary albumin reductions were greater in the vegan group (15.9 mg/24 h) than in the ADA group (10.9 mg/24 h) (P = 0.013).\n\n\nCONCLUSIONS\nBoth a low-fat vegan diet and a diet based on ADA guidelines improved glycemic and lipid control in type 2 diabetic patients. These improvements were greater with a low-fat vegan diet.", "title": "" }, { "docid": "02209c1215a39c17b4099603ef700c97", "text": "The goal of the Automated Evaluation of Scientific Writing (AESW) Shared Task is to analyze the linguistic characteristics of scientific writing to promote the development of automated writing evaluation tools that can assist authors in writing scientific papers. The proposed task is to predict whether a given sentence requires editing to ensure its “fit” with the scientific writing genre. We describe the proposed task, training, development, and test data sets, and evaluation metrics. Quality means doing it right when no one is looking. – Henry Ford", "title": "" }, { "docid": "7c1b3ba1b8e33ed866ae90b3ddf80ce6", "text": "This paper presents a universal tuning system for harmonic operation of series-resonant inverters (SRI), based on a self-oscillating switching method. In the new tuning system, SRI can instantly operate in one of the switching frequency harmonics, e.g., the first, third, or fifth harmonic. Moreover, the new system can utilize pulse density modulation (PDM), phase shift (PS), and power–frequency control methods for each harmonic. Simultaneous combination of PDM and PS control method is also proposed for smoother power regulation. In addition, this paper investigates performance of selected harmonic operation based on phase-locked loop (PLL) circuits. In comparison with the fundamental harmonic operation, PLL circuits suffer from stability problem for the other harmonic operations. The proposed method has been verified using laboratory prototypes with resonant frequencies of 20 up to 75 kHz and output power of about 200 W.", "title": "" }, { "docid": "9b123e0cf32118094b803323d1073b99", "text": "The lack of sufficient labeled Web pages in many languages, especially for those uncommonly used ones, presents a great challenge to traditional supervised classification methods to achieve satisfactory Web page classification performance. To address this, we propose a novel Nonnegative Matrix Tri-factorization (NMTF) based Dual Knowledge Transfer (DKT) approach for cross-language Web page classification, which is based on the following two important observations. First, we observe that Web pages for a same topic from different languages usually share some common semantic patterns, though in different representation forms. Second, we also observe that the associations between word clusters and Web page classes are a more reliable carrier than raw words to transfer knowledge across languages. With these recognitions, we attempt to transfer knowledge from the auxiliary language, in which abundant labeled Web pages are available, to target languages, in which we want classify Web pages, through two different paths: word cluster approximations and the associations between word clusters and Web page classes. 
Due to the reinforcement between these two different knowledge transfer paths, our approach can achieve better classification accuracy. We evaluate the proposed approach in extensive experiments using a real world cross-language Web page data set. Promising results demonstrate the effectiveness of our approach that is consistent with our theoretical analyses.", "title": "" }, { "docid": "d6f1278ccb6de695200411137b85b89a", "text": "The complexity of information systems is increasing in recent years, leading to increased effort for maintenance and configuration. Self-adaptive systems (SASs) address this issue. Due to new computing trends, such as pervasive computing, miniaturization of IT leads to mobile devices with the emerging need for context adaptation. Therefore, it is beneficial that devices are able to adapt context. Hence, we propose to extend the definition of SASs and include context adaptation. This paper presents a taxonomy of self-adaptation and a survey on engineering SASs. Based on the taxonomy and the survey, we motivate a new perspective on SAS including context adaptation.", "title": "" }, { "docid": "c174facf9854db5aae149e82f9f2a239", "text": "A new feeding technique for printed Log-periodic dipole arrays (LPDAs) is presented, and used to design a printed LPDA operating between 4 and 18 GHz. The antenna has been designed using CST MICROWAVE STUDIO 2010, and the simulation results show that the antenna can be used as an Ultra Wideband Antenna in the range 6-9 GHz.", "title": "" }, { "docid": "e473c5133203e8f1b937ec9dae7cd469", "text": "The Data Warehouse (DW) design remains a great challenge process for DW designers. As well, so far, there is no strong method to support the requirements analysis process in DW projects. The literature approaches try to solve this tedious and important issue; however, many of these approaches ignore or bypass the requirements elicitation phase. In this paper, we propose a method to generate multidimensional schemas from decisional requirements. We elected natural language (NL) like syntax for expressing decisional/business users' needs. Our approach distinguishes from existing ones in that it: i) is NL-based for requirements elicitation; ii) uses a matrix representation to normalize users' requirements, iii) automates the generation of star schemas relying on eight specific heuristics. We developed SSReq (Star Schemas from Requirements) prototype to demonstrate the feasibility of our approach illustrated with a real case study.", "title": "" }, { "docid": "3a6c58a05427392750d15307fda4faec", "text": "In this paper, we present the design of a low voltage bandgap reference (LVBGR) circuit for supply voltage of 1.2V which can generate an output reference voltage of 0.363V. Traditional BJT based bandgap reference circuits give very precise output reference but power and area consumed by these BJT devices is larger so for low supply bandgap reference we chose MOSFETs operating in subthreshold region based reference circuits. LVBGR circuits with less sensitivity to supply voltage and temperature is used in both analog and digital circuits like high precise comparators used in data converter, phase-locked loop, ring oscillator, memory systems, implantable biomedical product etc. In the proposed circuit subthreshold MOSFETs temperature characteristics are used to achieve temperature compensation of output voltage reference and it can work under very low supply voltage. 
A PMOS structure 2stage opamp which will be operating in subthreshold region is designed for the proposed LVBGR circuit whose gain is 89.6dB and phase margin is 74 °. Finally a LVBGR circuit is designed which generates output voltage reference of 0.364V given with supply voltage of 1.2 V with 10 % variation and temperature coefficient of 240ppm/ °C is obtained for output reference voltage variation with respect to temperature over a range of 0 to 100°C. The output reference voltage exhibits a variation of 230μV with a supply range of 1.08V to 1.32V at typical process corner. The proposed LVBGR circuit for 1.2V supply is designed with the Mentor Graphics Pyxis tool using 130nm technology with EldoSpice simulator. Overall current consumed by the circuit is 900nA and also the power consumed by the entire LVBGR circuit is 0.9μW and the PSRR of the LVBGR circuit is -70dB.", "title": "" }, { "docid": "cdb83e9a31172d6687622dc7ac841c91", "text": "Introduction Various forms of social media are used by many mothers to maintain social ties and manage the stress associated with their parenting roles and responsibilities. ‘Mommy blogging’ as a specific type of social media usage is a common and growing phenomenon, but little is known about mothers’ blogging-related experiences and how these may contribute to their wellbeing. This exploratory study investigated the blogging-related motivations and goals of Australian mothers. Methods An online survey was emailed to members of an Australian online parenting community. The survey included open-ended questions that invited respondents to discuss their motivations and goals for blogging. A thematic analysis using a grounded approach was used to analyze the qualitative data obtained from 235 mothers. Results Five primary motivations for blogging were identified: developing connections with others, experiencing heightened levels of mental stimulation, achieving self-validation, contributing to the welfare of others, and extending skills and abilities. Discussion These motivations are discussed in terms of their various properties and dimensions to illustrate how these mothers appear to use blogging to enhance their psychological wellbeing.", "title": "" }, { "docid": "f77495366909b9713463bebf2b4ff2fc", "text": "This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 % for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.", "title": "" }, { "docid": "9b7ff8a7dec29de5334f3de8d1a70cc3", "text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. 
A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.", "title": "" }, { "docid": "fefa533d5abb4be0afe76d9a7bbd9435", "text": "Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. A limitation of previous keyphrase extraction algorithms is that the selected keyphrases are occasionally incoherent. That is, the majority of the output keyphrases may fit together well, but there may be a minority that appear to be outliers, with no clear semantic relation to the majority or to each other. This paper presents enhancements to the Kea keyphrase extraction algorithm that are designed to increase the coherence of the extracted keyphrases. The approach is to use the degree of statistical association among candidate keyphrases as evidence that they may be semantically related. The statistical association is measured using web mining. Experiments demonstrate that the enhancements improve the quality of the extracted keyphrases. Furthermore, the enhancements are not domain-specific: the algorithm generalizes well when it is trained on one domain (computer science documents) and tested on another (physics documents).", "title": "" }, { "docid": "e41e5221116a7b63c2238fc4541c1d4c", "text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii CHAPTER", "title": "" } ]
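As a rough illustration of the statistical-association idea in the keyphrase-coherence abstract above (docid fefa533d...): candidate phrases can be re-ranked by their average pointwise mutual information (PMI) with the other selected phrases, so that a semantically unrelated outlier receives a low score. The occurrence counts below are invented placeholders standing in for the web hit counts that abstract mentions, and the scoring is a generic PMI sketch rather than the exact Kea enhancement evaluated in that work.

import math

TOTAL_DOCS = 1_000_000  # assumed size of the corpus/web sample (placeholder)
hits = {"neural network": 50_000, "backpropagation": 8_000, "stock market": 20_000}
cohits = {("neural network", "backpropagation"): 4_000,
          ("neural network", "stock market"): 300,
          ("backpropagation", "stock market"): 40}

def pmi(a, b):
    # Pointwise mutual information estimated from (co-)occurrence counts.
    p_a, p_b = hits[a] / TOTAL_DOCS, hits[b] / TOTAL_DOCS
    p_ab = cohits.get((a, b), cohits.get((b, a), 0)) / TOTAL_DOCS
    return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

def coherence(phrase, others):
    # Average association of one candidate with the rest of the extracted set.
    return sum(pmi(phrase, o) for o in others) / len(others)

candidates = list(hits)
for c in candidates:
    print(c, round(coherence(c, [o for o in candidates if o != c]), 2))

On these toy counts, "stock market" scores far below the other two phrases, which is exactly the kind of outlier that a coherence-oriented re-ranking is meant to demote.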
scidocsrr
d7d808b8f227180a5b507e274d286096
Almost Linear VC-Dimension Bounds for Piecewise Polynomial Networks
[ { "docid": "40b78c5378159e9cdf38275a773b8109", "text": "For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by $${\\text{O}}\\left( {\\frac{{C_f^2 }}{n}} \\right) + O(\\frac{{ND}}{N}\\log N)$$ where n is the number of nodes, d is the input dimension of the function, N is the number of training observations, and C f is the first absolute moment of the Fourier magnitude distribution of f. The two contributions to this total risk are the approximation error and the estimation error. Approximation error refers to the distance between the target function and the closest neural network function of a given architecture and estimation error refers to the distance between this ideal network function and an estimated network function. With n ~ C f(N/(dlog N))1/2 nodes, the order of the bound on the mean integrated squared error is optimized to be O(C f((d/N)log N)1/2). The bound demonstrates surprisingly favorable properties of network estimation compared to traditional series and nonparametric curve estimation techniques in the case that d is moderately large. Similar bounds are obtained when the number of nodes n is not preselected as a function of C f (which is generally not known a priori), but rather the number of nodes is optimized from the observed data by the use of a complexity regularization or minimum description length criterion. The analysis involves Fourier techniques for the approximation error, metric entropy considerations for the estimation error, and a calculation of the index of resolvability of minimum complexity estimation of the family of networks.", "title": "" } ]
[ { "docid": "3e23069ba8a3ec3e4af942727c9273e9", "text": "This paper describes an automated tool called Dex (difference extractor) for analyzing syntactic and semantic changes in large C-language code bases. It is applied to patches obtained from a source code repository, each of which comprises the code changes made to accomplish a particular task. Dex produces summary statistics characterizing these changes for all of the patches that are analyzed. Dex applies a graph differencing algorithm to abstract semantic graphs (ASGs) representing each version. The differences are then analyzed to identify higher-level program changes. We describe the design of Dex, its potential applications, and the results of applying it to analyze bug fixes from the Apache and GCC projects. The results include detailed information about the nature and frequency of missing condition defects in these projects.", "title": "" }, { "docid": "990d811789fd5025d784a147facf9d07", "text": "1389-1286/$ see front matter 2012 Elsevier B.V http://dx.doi.org/10.1016/j.comnet.2012.06.016 ⇑ Corresponding author. Tel.: +216 96 819 500. E-mail addresses: olfa.gaddour@enis.rnu.tn (O isep.ipp.pt (A. Koubâa). IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) is a routing protocol specifically designed for Low power and Lossy Networks (LLN) compliant with the 6LoWPAN protocol. It currently shows up as an RFC proposed by the IETF ROLL working group. However, RPL has gained a lot of maturity and is attracting increasing interest in the research community. The absence of surveys about RPL motivates us to write this paper, with the objective to provide a quick introduction to RPL. In addition, we present the most relevant research efforts made around RPL routing protocol that pertain to its performance evaluation, implementation, experimentation, deployment and improvement. We also present an experimental performance evaluation of RPL for different network settings to understand the impact of the protocol attributes on the network behavior, namely in terms of convergence time, energy, packet loss and packet delay. Finally, we point out open research challenges on the RPL design. We believe that this survey will pave the way for interested researchers to understand its behavior and contributes for further relevant research works. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "76dd20f0464ff42badc5fd4381eed256", "text": "C therapy (CBT) approaches are rooted in the fundamental principle that an individual’s cognitions play a significant and primary role in the development and maintenance of emotional and behavioral responses to life situations. In CBT models, cognitive processes, in the form of meanings, judgments, appraisals, and assumptions associated with specific life events, are the primary determinants of one’s feelings and actions in response to life events and thus either facilitate or hinder the process of adaptation. CBT includes a range of approaches that have been shown to be efficacious in treating posttraumatic stress disorder (PTSD). In this chapter, we present an overview of leading cognitive-behavioral approaches used in the treatment of PTSD. The treatment approaches discussed here include cognitive therapy/reframing, exposure therapies (prolonged exposure [PE] and virtual reality exposure [VRE]), stress inoculation training (SIT), eye movement desensitization and reprocessing (EMDR), and Briere’s selftrauma model (1992, 1996, 2002). 
In our discussion of each of these approaches, we include a description of the key assumptions that frame the particular approach and the main strategies associated with the treatment. In the final section of this chapter, we review the growing body of research that has evaluated the effectiveness of cognitive-behavioral treatments for PTSD.", "title": "" }, { "docid": "1b76b9d3f1326e8f6522f3cdd2c276bb", "text": "Classifier has been widely applied in machine learning, such as pattern recognition, medical diagnosis, credit scoring, banking and weather prediction. Because of the limited local storage at user side, data and classifier has to be outsourced to cloud for storing and computing. However, due to privacy concerns, it is important to preserve the confidentiality of data and classifier in cloud computing because the cloud servers are usually untrusted. In this work, we propose a framework for privacy-preserving outsourced classification in cloud computing (POCC). Using POCC, an evaluator can securely train a classification model over the data encrypted with different public keys, which are outsourced from the multiple data providers. We prove that our scheme is secure in the semi-honest model", "title": "" }, { "docid": "6d2d9de5db5b03a98a26efc8453588d8", "text": "In this paper we describe a system for use on a mobile robot that detects potential loop closures using both the visual and spatial appearance of the local scene. Loop closing is the act of correctly asserting that a vehicle has returned to a previously visited location. It is an important component in the search to make SLAM (Simultaneous Localization and Mapping) the reliable technology it should be. Paradoxically, it is hardest in the presence of substantial errors in vehicle pose estimates which is exactly when it is needed most. The contribution of this paper is to show how a principled and robust description of local spatial appearance (using laser rangefinder data) can be combined with a purely camera based system to produce superior performance. Individual spatial components (segments) of the local structure are described using a rotationally invariant shape descriptor and salient aspects thereof, and entropy as measure of their innate complexity. Comparisons between scenes are made using relative entropy and by examining the mutual arrangement of groups of segments. We show the inclusion of spatial information allows the resolution of ambiguities stemming from repetitive visual artifacts in urban settings. Importantly the method we present is entirely independent of the navigation and or mapping process and so is entirely unaffected by gross errors in pose estimation.", "title": "" }, { "docid": "4f7c1a965bcde03dedf1702c85b2ce77", "text": "Strategic managers are consistently faced with the decision of how to allocate scarce corporate resources in an environment that is placing more and more pressures on them. Recent scholarship in strategic management suggests that many of these pressures come directly from sources associated with social issues in management, rather than traditional arenas of strategic management. Using a greatly-improved source of data on corporate social performance, this paper reports the results of a rigorous study of the empirical linkages between financial and social performance. CSP is found to be positively associated with prior financial performance, supporting the theory that slack resource availability and CSP are positively related. 
CSP is also found to be positively associated with future financial performance, supporting the theory that good management and CSP are positively related. Post-print version of an article published in Strategic Management Journal 18(4): 303-319 (1997 April). doi: 10.1002/(SICI)1097-0266(199704)18:4<303::AID-SMJ869>3.0.CO;2-G", "title": "" }, { "docid": "02621546c67e6457f350d0192b616041", "text": "Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d) to O(d log d), and the space complexity from O(d) to O(d) where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.", "title": "" }, { "docid": "caaca962473382e40a08f90240cc88b6", "text": "Lysergic acid diethylamide (LSD) was synthesized in 1938 and its psychoactive effects discovered in 1943. It was used during the 1950s and 1960s as an experimental drug in psychiatric research for producing so-called \"experimental psychosis\" by altering neurotransmitter system and in psychotherapeutic procedures (\"psycholytic\" and \"psychedelic\" therapy). From the mid 1960s, it became an illegal drug of abuse with widespread use that continues today. With the entry of new methods of research and better study oversight, scientific interest in LSD has resumed for brain research and experimental treatments. Due to the lack of any comprehensive review since the 1950s and the widely dispersed experimental literature, the present review focuses on all aspects of the pharmacology and psychopharmacology of LSD. A thorough search of the experimental literature regarding the pharmacology of LSD was performed and the extracted results are given in this review. (Psycho-) pharmacological research on LSD was extensive and produced nearly 10,000 scientific papers. The pharmacology of LSD is complex and its mechanisms of action are still not completely understood. LSD is physiologically well tolerated and psychological reactions can be controlled in a medically supervised setting, but complications may easily result from uncontrolled use by layman. Actually there is new interest in LSD as an experimental tool for elucidating neural mechanisms of (states of) consciousness and there are recently discovered treatment options with LSD in cluster headache and with the terminally ill.", "title": "" }, { "docid": "c7d71b7bb07f62f4b47d87c9c4bae9b3", "text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. 
However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.", "title": "" }, { "docid": "7f9b7f50432d04968a1fb62855481eda", "text": "BACKGROUND/PURPOSE\nAccurate prenatal diagnosis of complex anatomic connections and associated anomalies has only been possible recently with the use of ultrasonography, echocardiography, and fetal magnetic resonance imaging (MRI). To assess the impact of improved antenatal diagnosis in the management and outcome of conjoined twins, the authors reviewed their experience with 14 cases.\n\n\nMETHODS\nA retrospective review of prenatally diagnosed conjoined twins referred to our institution from 1996 to present was conducted.\n\n\nRESULTS\nIn 14 sets of conjoined twins, there were 10 thoracoomphalopagus, 2 dicephalus tribrachius dipus, 1 ischiopagus, and 1 ischioomphalopagus. The earliest age at diagnosis was 9 weeks' gestation (range, 9 to 29; mean, 20). Prenatal imaging with ultrasonography, echocardiography, and ultrafast fetal MRI accurately defined the shared anatomy in all cases. Associated anomalies included cardiac malformations (11 of 14), congenital diaphragmatic hernia (4 of 14), abdominal wall defects (2 of 14), and imperforate anus (2 of 14). Three sets of twins underwent therapeutic abortion, 1 set of twins died in utero, and 10 were delivered via cesarean section at a mean gestational age of 34 weeks. There were 5 individual survivors in the series after separation (18%). In one case, in which a twin with a normal heart perfused the cotwin with a rudimentary heart, the ex utero intrapartum treatment procedure (EXIT) was utilized because of concern that the normal twin would suffer immediate cardiac decompensation at birth. This EXIT-to-separation strategy allowed prompt control of the airway and circulation before clamping the umbilical cord and optimized control over a potentially emergent situation, leading to survival of the normal cotwin. In 2 sets of twins in which each twin had a normal heart, tissue expanders were inserted before separation.\n\n\nCONCLUSIONS\nAdvances in prenatal diagnosis allow detailed, accurate evaluations of conjoined twins. Careful prenatal studies may uncover cases in which emergent separation at birth is lifesaving.", "title": "" }, { "docid": "58677916e11e6d5401b7396d117a517b", "text": "This work contributes to the development of a common framework for the discussion and analysis of dexterous manipulation across the human and robotic domains. An overview of previous work is first provided along with an analysis of the tradeoffs between arm and hand dexterity. A hand-centric and motion-centric manipulation classification is then presented and applied in four different ways. It is first discussed how the taxonomy can be used to identify a manipulation strategy. Then, applications for robot hand analysis and engineering design are explained. 
Finally, the classification is applied to three activities of daily living (ADLs) to distinguish the patterns of dexterous manipulation involved in each task. The same analysis method could be used to predict problem ADLs for various impairments or to produce a representative benchmark set of ADL tasks. Overall, the classification scheme proposed creates a descriptive framework that can be used to effectively describe hand movements during manipulation in a variety of contexts and might be combined with existing object centric or other taxonomies to provide a complete description of a specific manipulation task.", "title": "" }, { "docid": "b8b4e582fbcc23a5a72cdaee1edade32", "text": "In recent years, research into the mining of user check-in behavior for point-of-interest (POI) recommendations has attracted a lot of attention. Existing studies on this topic mainly treat such recommendations in a traditional manner—that is, they treat POIs as items and check-ins as ratings. However, users usually visit a place for reasons other than to simply say that they have visited. In this article, we propose an approach referred to as Urban POI-Walk (UPOI-Walk), which takes into account a user's social-triggered intentions (SI), preference-triggered intentions (PreI), and popularity-triggered intentions (PopI), to estimate the probability of a user checking-in to a POI. The core idea of UPOI-Walk involves building a HITS-based random walk on the normalized check-in network, thus supporting the prediction of POI properties related to each user's preferences. To achieve this goal, we define several user--POI graphs to capture the key properties of the check-in behavior motivated by user intentions. In our UPOI-Walk approach, we propose a new kind of random walk model—Dynamic HITS-based Random Walk—which comprehensively considers the relevance between POIs and users from different aspects. On the basis of similitude, we make an online recommendation as to the POI the user intends to visit. To the best of our knowledge, this is the first work on urban POI recommendations that considers user check-in behavior motivated by SI, PreI, and PopI in location-based social network data. Through comprehensive experimental evaluations on two real datasets, the proposed UPOI-Walk is shown to deliver excellent performance.", "title": "" }, { "docid": "7856e64f16a6b57d8f8743d94ea9f743", "text": "Unconsciousness is a fundamental component of general anesthesia (GA), but anesthesiologists have no reliable ways to be certain that a patient is unconscious. To develop EEG signatures that track loss and recovery of consciousness under GA, we recorded high-density EEGs in humans during gradual induction of and emergence from unconsciousness with propofol. The subjects executed an auditory task at 4-s intervals consisting of interleaved verbal and click stimuli to identify loss and recovery of consciousness. During induction, subjects lost responsiveness to the less salient clicks before losing responsiveness to the more salient verbal stimuli; during emergence they recovered responsiveness to the verbal stimuli before recovering responsiveness to the clicks. The median frequency and bandwidth of the frontal EEG power tracked the probability of response to the verbal stimuli during the transitions in consciousness. 
Loss of consciousness was marked simultaneously by an increase in low-frequency EEG power (<1 Hz), the loss of spatially coherent occipital alpha oscillations (8-12 Hz), and the appearance of spatially coherent frontal alpha oscillations. These dynamics reversed with recovery of consciousness. The low-frequency phase modulated alpha amplitude in two distinct patterns. During profound unconsciousness, alpha amplitudes were maximal at low-frequency peaks, whereas during the transition into and out of unconsciousness, alpha amplitudes were maximal at low-frequency nadirs. This latter phase-amplitude relationship predicted recovery of consciousness. Our results provide insights into the mechanisms of propofol-induced unconsciousness, establish EEG signatures of this brain state that track transitions in consciousness precisely, and suggest strategies for monitoring the brain activity of patients receiving GA.", "title": "" }, { "docid": "ac62d57dac1a363275ddf989881d194a", "text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.08.010 ⇑ Corresponding author. Address: College of De University, 1239 Siping Road, Shanghai 200092, PR 6598 3432. E-mail addresses: huchenliu@foxmaill.com (H.-C (L. Liu), liunan@cqjtu.edu.cn (N. Liu). Failure mode and effects analysis (FMEA) is a risk assessment tool that mitigates potential failures in systems, processes, designs or services and has been used in a wide range of industries. The conventional risk priority number (RPN) method has been criticized to have many deficiencies and various risk priority models have been proposed in the literature to enhance the performance of FMEA. However, there has been no literature review on this topic. In this study, we reviewed 75 FMEA papers published between 1992 and 2012 in the international journals and categorized them according to the approaches used to overcome the limitations of the conventional RPN method. The intention of this review is to address the following three questions: (i) Which shortcomings attract the most attention? (ii) Which approaches are the most popular? (iii) Is there any inadequacy of the approaches? The answers to these questions will give an indication of current trends in research and the best direction for future research in order to further address the known deficiencies associated with the traditional FMEA. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4deb101ba94ef958cfe84610f2abccc4", "text": "Iris recognition is considered to be the most reliable and accurate biometric identification system available. Iris recognition system captures an image of an individual’s eye, the iris in the image is then meant for the further segmentation and normalization for extracting its feature. The performance of iris recognition systems depends on the process of segmentation. Segmentation is used for the localization of the correct iris region in the particular portion of an eye and it should be done accurately and correctly to remove the eyelids, eyelashes, reflection and pupil noises present in iris region. In our paper we are using Daughman’s Algorithm segmentation method for Iris Recognition. Iris images are selected from the CASIA Database, then the iris and pupil boundary are detected from rest of the eye image, removing the noises. The segmented iris region was normalized to minimize the dimensional inconsistencies between iris regions by using Daugman’s Rubber Sheet Model. 
Then the features of the iris were encoded by convolving the normalized iris region with 1D Log-Gabor filters and phase quantizing the output in order to produce a bit-wise biometric template. The Hamming distance was chosen as a matching metric, which gave the measure of how many bits disagreed between the templates of the iris. Index Terms —Daughman’s Algorithm, Daugman’s Rubber Sheet Model, Hamming Distance, Iris Recognition, segmentation.", "title": "" }, { "docid": "ca70bf377f8823c2ecb1cdd607c064ec", "text": "To date, few studies have compared the effectiveness of topical silicone gels versus that of silicone gel sheets in preventing scars. In this prospective study, we compared the efficacy and the convenience of use of the 2 products. We enrolled 30 patients who had undergone a surgical procedure 2 weeks to 3 months before joining the study. These participants were randomly assigned to 2 treatment arms: one for treatment with a silicone gel sheet, and the other for treatment with a topical silicone gel. Vancouver Scar Scale (VSS) scores were obtained for all patients; in addition, participants completed scoring patient questionnaires 1 and 3 months after treatment onset. Our results reveal not only that no significant difference in efficacy exists between the 2 products but also that topical silicone gels are more convenient to use. While previous studies have advocated for silicone gel sheets as first-line therapies in postoperative scar management, we maintain that similar effects can be expected with topical silicone gel. The authors recommend that, when clinicians have a choice of silicone-based products for scar prevention, they should focus on each patient's scar location, lifestyle, and willingness to undergo scar prevention treatment.", "title": "" }, { "docid": "af6464d1e51cb59da7affc73977eed71", "text": "Recommender systems leverage both content and user interactions to generate recommendations that fit users' preferences. The recent surge of interest in deep learning presents new opportunities for exploiting these two sources of information. To recommend items we propose to first learn a user-independent high-dimensional semantic space in which items are positioned according to their substitutability, and then learn a user-specific transformation function to transform this space into a ranking according to the user's past preferences. An advantage of the proposed architecture is that it can be used to effectively recommend items using either content that describes the items or user-item ratings. We show that this approach significantly outperforms state-of-the-art recommender systems on the MovieLens 1M dataset.", "title": "" }, { "docid": "1b3c37f20cc341f50c7d12c425bc94af", "text": "Vertex is a Wrapper Induction system developed at Yahoo! for extracting structured records from template-based Web pages. To operate at Web scale, Vertex employs a host of novel algorithms for (1) Grouping similar structured pages in a Web site, (2) Picking the appropriate sample pages for wrapper inference, (3) Learning XPath-based extraction rules that are robust to variations in site structure, (4) Detecting site changes by monitoring sample pages, and (5) Optimizing editorial costs by reusing rules, etc. The system is deployed in production and currently extracts more than 250 million records from more than 200 Web sites. 
To the best of our knowledge, Vertex is the first system to do high-precision information extraction at Web scale.", "title": "" }, { "docid": "66638a2a66f6829f5b9ac72e4ace79ed", "text": "The Theory of Waste Management is a unified body of knowledge about waste and waste management, and it is founded on the expectation that waste management is to prevent waste from causing harm to human health and the environment and to promote resource use optimization. Waste Management Theory is to be constructed under the paradigm of Industrial Ecology, as Industrial Ecology is equally adaptable to incorporating waste minimization and/or resource use optimization goals and values.", "title": "" } ]
scidocsrr
7869dcc5bcfb069ecbf790ca41cbe38b
Hybrid Approach for Emotion Classification of Audio Conversation Based on Text and Speech Mining
[ { "docid": "cfbf63d92dfafe4ac0243acdff6cf562", "text": "In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named W ORDNETAFFECT) was developed starting from W ORDNET, through a selection and tagging of a subset of synsets representing the affective", "title": "" } ]
[ { "docid": "16c87d75564404d52fc2abac55297931", "text": "SHADE is an adaptive DE which incorporates success-history based parameter adaptation and one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as the state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.", "title": "" }, { "docid": "ba5cd7dcf8d7e9225df1d9dc69c95c11", "text": "Œe e‚ective of information retrieval (IR) systems have become more important than ever. Deep IR models have gained increasing aŠention for its ability to automatically learning features from raw text; thus, many deep IR models have been proposed recently. However, the learning process of these deep IR models resemble a black box. Œerefore, it is necessary to identify the di‚erence between automatically learned features by deep IR models and hand-cra‰ed features used in traditional learning to rank approaches. Furthermore, it is valuable to investigate the di‚erences between these deep IR models. Œis paper aims to conduct a deep investigation on deep IR models. Speci€cally, we conduct an extensive empirical study on two di‚erent datasets, including Robust and LETOR4.0. We €rst compared the automatically learned features and handcra‰ed features on the respects of query term coverage, document length, embeddings and robustness. It reveals a number of disadvantages compared with hand-cra‰ed features. Œerefore, we establish guidelines for improving existing deep IR models. Furthermore, we compare two di‚erent categories of deep IR models, i.e. representation-focused models and interaction-focused models. It is shown that two types of deep IR models focus on di‚erent categories of words, including topic-related words and query-related words.", "title": "" }, { "docid": "9d82ce8e6630a9432054ed97752c7ec6", "text": "Development is the powerful process involving a genome in the transformation from one egg cell to a multicellular organism with many cell types. The dividing cells manage to organize and assign themselves special, differentiated roles in a reliable manner, creating a spatio-temporal pattern and division of labor. This despite the fact that little positional information may be available to them initially to guide this patterning. Inspired by a model of developmental biologist L. Wolpert, we simulate this situation in an evolutionary setting where individuals have to grow into “French flag” patterns. The cells in our model exist in a 2-layer Potts model physical environment. Controlled by continuous genetic regulatory networks, identical for all cells of one individual, the cells can individually differ in parameters including target volume, shape, orientation, and diffusion. Intercellular communication is possible via secretion and sensing of diffusing morphogens. Evolved individuals growing from a single cell can develop the French flag pattern by setting up and maintaining asymmetric morphogen gradients – a behavior predicted by several theoretical models.", "title": "" }, { "docid": "130139c25f42dbf9c779e5fc3db5f721", "text": "Among many movies that have been released, some generate high profit while the others do not. 
This paper studies the relationship between movie factors and its revenue and build prediction models. Besides analysis on aggregate data, we also divide data into groups using different methods and compare accuracy across these techniques as well as explore whether clustering techniques could help improve accuracy. Specifically, two major steps were employed. Initially, linear regression, polynomial regression and support vector regression (SVR) were applied on the entire movie data to predict the movie revenue. Then, clustering techniques, such as by genre, using Expectation Maximization (EM) and using K-means were applied to divide data into groups before regression analyses are executed. To compare accuracy among different techniques, R-square and the root-mean-square error (RMSE) were used as a performance indicator. Our study shows that generally linear regression without clustering offers the model with the highest R-square, while linear regression with EM clustering yields the lowest RMSE.", "title": "" }, { "docid": "110742230132649f178d2fa99c8ffade", "text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.", "title": "" }, { "docid": "9b658cf50907e117fdc071ff5d60f8ba", "text": "Ontology-based data access (OBDA) is a new paradigm aiming at accessing and managing data by means of an ontology, i.e., a conceptual representation of the domain of interest in the underlying information system. In the last years, this new paradigm has been used for providing users with abstract (independent from technological and system-oriented aspects), effective, and reasoning-intensive mechanisms for querying the data residing at the information system sources. In this paper we argue that OBDA, besides querying data, provides the right principles for devising a formal approach to data quality. In particular, we concentrate on one of the most important dimensions considered both in the literature and in the practice of data quality, namely consistency. We define a general framework for data consistency in OBDA, and present algorithms and complexity analysis for several relevant tasks related to the problem of checking data quality under this dimension, both at the extensional level (content of the data sources), and at the intensional level (schema of the", "title": "" }, { "docid": "41a0f95ef912cb6adf072ee33064589d", "text": "This paper proposes an active capacitive sensing circuit for fingerprint sensors, which includes a pixel level charge-sharing and charge pump to replace an ADC. This paper also proposes the operating algorithm for 16-level gray scale image. The active capacitive technology is more flexible and can be adjusted to adapt to a wide range of different skin types and environments. The proposed novel circuit is composed with unit gain buffer, 6-stage charge pump and analog comparator. 
The proper operation is validated by the HSPICE simulation of one pixel with condition of 0.35μm typical CMOS parameter and 3.3V power.", "title": "" }, { "docid": "5a4a6328fc88fbe32a81c904135b05c9", "text": "Semi-supervised learning plays a significant role in multi-class classification, where a small number of labeled data are more deterministic while substantial unlabeled data might cause large uncertainties and potential threats. In this paper, we distinguish the label fitting of labeled and unlabeled training data through a probabilistic vector with an adaptive parameter, which always ensures the significant importance of labeled data and characterizes the contribution of unlabeled instance according to its uncertainty. Instead of using traditional least squares regression (LSR) for classification, we develop a new discriminative LSR by equipping each label with an adjustment vector. This strategy avoids incorrect penalization on samples that are far away from the boundary and simultaneously facilitates multi-class classification by enlarging the geometrical distance of instances belonging to different classes. An efficient alternative algorithm is exploited to solve the proposed model with closed form solution for each updating rule. We also analyze the convergence and complexity of the proposed algorithm theoretically. Experimental results on several benchmark datasets demonstrate the effectiveness and superiority of the proposed model for multi-class classification tasks.", "title": "" }, { "docid": "75a9715ce9eaffaa43df5470ad7cacca", "text": "Resting frontal electroencephalographic (EEG) asymmetry has been hypothesized as a marker of risk for major depressive disorder (MDD), but the extant literature is based predominately on female samples. Resting frontal asymmetry was assessed on 4 occasions within a 2-week period in 306 individuals aged 18-34 (31% male) with (n = 143) and without (n = 163) lifetime MDD as defined by the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (American Psychiatric Association, 1994). Lifetime MDD was linked to relatively less left frontal activity for both sexes using a current source density (CSD) reference, findings that were not accounted for solely by current MDD status or current depression severity, suggesting that CSD-referenced EEG asymmetry is a possible endophenotype for depression. In contrast, results for average and linked mastoid references were less consistent but demonstrated a link between less left frontal activity and current depression severity in women.", "title": "" }, { "docid": "23fe6b01d4f31e69e753ff7c78674f19", "text": "Advancements in information technology often task users with complex and consequential privacy and security decisions. A growing body of research has investigated individuals’ choices in the presence of privacy and information security tradeoffs, the decision-making hurdles affecting those choices, and ways to mitigate such hurdles. This article provides a multi-disciplinary assessment of the literature pertaining to privacy and security decision making. It focuses on research on assisting individuals’ privacy and security choices with soft paternalistic interventions that nudge users toward more beneficial choices. 
The article discusses potential benefits of those interventions, highlights their shortcomings, and identifies key ethical, design, and research challenges.", "title": "" }, { "docid": "a56552cb8ab102fb73a5824634e2c027", "text": "In this paper, a tutorial overview on anomaly detection for hyperspectral electro-optical systems is presented. This tutorial is focused on those techniques that aim to detect small man-made anomalies typically found in defense and surveillance applications. Since a variety of methods have been proposed for detecting such targets, this tutorial places emphasis on the techniques that are either mathematically more tractable or easier to interpret physically. These methods are not only more suitable for a tutorial publication, but also an essential to a study of anomaly detection. Previous surveys on this subject have focused mainly on anomaly detectors developed in a statistical framework and have been based on well-known background statistical models. However, the most recent research trends seem to move away from the statistical framework and to focus more on deterministic and geometric concepts. This work also takes into consideration these latest trends, providing a wide theoretical review without disregarding practical recommendations about algorithm implementation. The main open research topics are addressed as well, the foremost being algorithm optimization, which is required for embodying anomaly detectors in real-time systems.", "title": "" }, { "docid": "6ae33cdc9601c90f9f3c1bda5aa8086f", "text": "A k-uniform hypergraph is hamiltonian if for some cyclic ordering of its vertex set, every k consecutive vertices form an edge. In 1952 Dirac proved that if the minimum degree in an n-vertex graph is at least n/2 then the graph is hamiltonian. We prove an approximate version of an analogous result for uniform hypergraphs: For every k ≥ 3 and γ > 0, and for all n large enough, a sufficient condition for an n-vertex k-uniform hypergraph to be hamiltonian is that each (k − 1)-element set of vertices is contained in at least (1/2 + γ)n edges. Research supported by NSF grant DMS-0300529. Research supported by KBN grant 2 P03A 015 23 and N201036 32/2546. Part of research performed at Emory University, Atlanta. Research supported by NSF grant DMS-0100784", "title": "" }, { "docid": "5935224c53222d0234adffddae23eb04", "text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. 
Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.", "title": "" }, { "docid": "fe407c5c554096543ab05550599b369a", "text": "The IMT 2020 requirements of 20 Gb/s peak data rate and 1 ms latency present significant engineering challenges for the design of 5G cellular systems. Systems that make use of the mmWave bands above 10 GHz ---where large regions of spectrum are available --- are a promising 5G candidate that may be able to rise to the occasion. However, although the mmWave bands can support massive peak data rates, delivering these data rates for end-to-end services while maintaining reliability and ultra-low-latency performance to support emerging applications and use cases will require rethinking all layers of the protocol stack. This article surveys some of the challenges and possible solutions for delivering end-to-end, reliable, ultra-low-latency services in mmWave cellular systems in terms of the MAC layer, congestion control, and core network architecture.", "title": "" }, { "docid": "450fdd88aa45a405eace9a5a1e0113f7", "text": "DNN-based cross-modal retrieval has become a research hotspot, by which users can search results across various modalities like image and text. However, existing methods mainly focus on the pairwise correlation and reconstruction error of labeled data. They ignore the semantically similar and dissimilar constraints between different modalities, and cannot take advantage of unlabeled data. This paper proposes Cross-modal Deep Metric Learning with Multi-task Regularization (CDMLMR), which integrates quadruplet ranking loss and semi-supervised contrastive loss for modeling cross-modal semantic similarity in a unified multi-task learning architecture. The quadruplet ranking loss can model the semantically similar and dissimilar constraints to preserve cross-modal relative similarity ranking information. The semi-supervised contrastive loss is able to maximize the semantic similarity on both labeled and unlabeled data. Compared to the existing methods, CDMLMR exploits not only the similarity ranking information but also unlabeled cross-modal data, and thus boosts cross-modal retrieval accuracy.", "title": "" }, { "docid": "74724f58c6542a75f7510ac79571c90d", "text": "The World Wide Web is moving from a Web of hyper-linked Documents to a Web of linked Data. Thanks to the Semantic Web spread and to the more recent Linked Open Data (LOD) initiative, a vast amount of RDF data have been published in freely accessible datasets. These datasets are connected with each other to form the so called Linked Open Data cloud. As of today, there are tons of RDF data available in the Web of Data, but only few applications really exploit their potential power. 
In this paper we show how these data can successfully be used to develop a recommender system (RS) that relies exclusively on the information encoded in the Web of Data. We implemented a content-based RS that leverages the data available within Linked Open Data datasets (in particular DBpedia, Freebase and LinkedMDB) in order to recommend movies to the end users. We extensively evaluated the approach and validated the effectiveness of the algorithms by experimentally measuring their accuracy with precision and recall metrics.", "title": "" }, { "docid": "b106be5cb0510e93b556a14f00877c3b", "text": "BACKGROUND\nNurses' behavior in Educational-Medical centers is very important for improving the condition of patients. Ethical climate represents the ethical values and behavioral expectations. Attitude of people toward religion is both intrinsic and extrinsic. Different ethical climates and attitude toward religion could be associated with nurses' behavior.\n\n\nAIM\nTo study the mediating effect of ethical climate on religious orientation and ethical behaviors of nurses.\n\n\nRESEARCH DESIGN\nIn an exploratory analysis study, the path analysis method was used to identify the effective variables on ethical behavior. Participants/context: The participants consisted of 259 Iranian nurses from Hamadan University of Medical Sciences. Ethical considerations: This project with an ethical code and a unique ID IR.UMSHA.REC.1395.67 was approved in the Research Council of Hamadan University of Medical Sciences.\n\n\nFINDINGS\nThe beta coefficients obtained by regression analysis of perception of ethical climate of individual egoism (B = -0.202, p < 0.001), individual ethical principles (B = -0.184, p = 0.001), local egoism (B = -0.136, p = 0.003), and extrinsic religious orientation (B = -0.266, p = 0.007) were significant that they could act as predictors of ethical behavior. The summary of regression model indicated that 0.27% of ethical behaviors of nurses are justified by two variables: ethical climate and religious orientation.\n\n\nDISCUSSION AND CONCLUSION\nIntrinsic religious orientation has the most direct impact and then, respectively, the variables of ethical climate of perceptions in the dimensions of individual egoism, individual ethical principles, local egoism, global ethical principle, and ethical behavior and extrinsic religious orientation follow. All the above, except global ethical principles and intrinsic orientation of religion have a negative effect on ethical behavior and can be predictors of ethical behavior. Therefore, applying strategies to promote theories of intrinsic religious orientation and global ethical principles in different situations of nursing is recommended.", "title": "" }, { "docid": "43882b64eec2667444a992d4da5484dd", "text": "Past research demonstrates that children learn from a previously accurate speaker rather than from a previously inaccurate one. This study shows that children do not necessarily treat a previously inaccurate speaker as unreliable. Rather, they appropriately excuse past inaccuracy arising from the speaker's limited information access. Children (N= 67) aged 3, 4, and 5 years aimed to identify a hidden toy in collaboration with a puppet as informant. When the puppet had previously been inaccurate despite having full information, children tended to ignore what they were told and guess for themselves: They treated the puppet as unreliable in the longer term. 
However, children more frequently believed a currently well-informed puppet whose past inaccuracies arose legitimately from inadequate information access.", "title": "" }, { "docid": "e464859fd25c6bdcf266ceec090af9f2", "text": "AC ◦ MOD2 circuits are AC circuits augmented with a layer of parity gates just above the input layer. We study AC ◦MOD2 circuit lower bounds for computing the Boolean Inner Product functions. Recent works by Servedio and Viola (ECCC TR12-144) and Akavia et al. (ITCS 2014) have highlighted this problem as a frontier problem in circuit complexity that arose both as a first step towards solving natural special cases of the matrix rigidity problem and as a candidate for constructing pseudorandom generators of minimal complexity. We give the first superlinear lower bound for the Boolean Inner Product function against AC ◦MOD2 of depth four or greater. Specifically, we prove a superlinear lower bound for circuits of arbitrary constant depth, and an Ω̃(n) lower bound for the special case of depth-4 AC ◦MOD2. Our proof of the depth-4 lower bound employs a new “moment-matching” inequality for bounded, nonnegative integer-valued random variables that may be of independent interest: we prove an optimal bound on the maximum difference between two discrete distributions’ values at 0, given that their first d moments match.", "title": "" }, { "docid": "b2382c9b14526bf7fe526e4d3dc82601", "text": "We have proposed, fabricated, and studied a new design of a high-speed optical non-volatile memory. The recording mechanism of the proposed memory utilizes a magnetization reversal of a nanomagnet by a spin-polarized photocurrent. It was shown experimentally that the operational speed of this memory may be extremely fast above 1 TBit/s. The challenges to realize both a high-speed recording and a high-speed reading are discussed. The memory is compact, integratable, and compatible with present semiconductor technology. If realized, it will advance data processing and computing technology towards a faster operation speed.", "title": "" } ]
scidocsrr
c452c6a4553d343cefe3fd686b2c8692
Analyzing Argumentative Discourse Units in Online Interactions
[ { "docid": "d7a348b092064acf2d6a4fd7d6ef8ee2", "text": "Argumentation theory involves the analysis of naturally occurring argument, and one key tool employed to this end both in the academic community and in teaching critical thinking skills to undergraduates is argument diagramming. By identifying the structure of an argument in terms of its constituents and the relationships between them, it becomes easier to critically evaluate each part of an argument in turn. The task of analysis and diagramming, however, is labor intensive and often idiosyncratic, which can make academic exchange difficult. The Araucaria system provides an interface which supports the diagramming process, and then saves the result using AML, an open standard, designed in XML, for describing argument structure. Araucaria aims to be of use not only in pedagogical situations, but also in support of research activity. As a result, it has been designed from the outset to handle more advanced argumentation theoretic concepts such as schemes, which capture stereotypical patterns of reasoning. The software is also designed to be compatible with a number of applications under development, including dialogic interaction and online corpus provision. Together, these features, combined with its platform independence and ease of use, have the potential to make Araucaria a valuable resource for the academic community.", "title": "" }, { "docid": "5f7adc28fab008d93a968b6a1e5ad061", "text": "This paper describes recent approaches using text-mining to automatically profile and extract arguments from legal cases. We outline some of the background context and motivations. We then turn to consider issues related to the construction and composition of a corpora of legal cases. We show how a Context-Free Grammar can be used to extract arguments, and how ontologies and Natural Language Processing can identify complex information such as case factors and participant roles. Together the results bring us closer to automatic identification of legal arguments.", "title": "" } ]
[ { "docid": "a3ad2be5b2b44277026ee9f84c0d416b", "text": "In order to attain a useful balanced scorecard (BSC), appropriate performance perspectives and indicators are crucial to reflect all strategies of the organisation. The objectives of this survey were to give an insight regarding the situation of the BSC in the health sector over the past decade, and to afford a generic approach of the BSC development for health settings with specific focus on performance perspectives, performance indicators and BSC generation. After an extensive search based on publication date and research content, 29 articles published since 2002 were identified, categorised and analysed. Four critical attributes of each article were analysed, including BSC generation, performance perspectives, performance indicators and auxiliary tools. The results showed that 'internal business process' was the most notable BSC perspective as it was included in all reviewed articles. After investigating the literature, it was concluded that its comprehensiveness is the reason for the importance and high usage of this perspective. The findings showed that 12 cases out of 29 reviewed articles (41%) exceeded the maximum number of key performance indicators (KPI) suggested in a previous study. It was found that all 12 cases were large organisations with numerous departments (e.g. national health organisations). Such organisations require numerous KPI to cover all of their strategic objectives. It was recommended to utilise the cascaded BSC within such organisations to avoid complexity and difficulty in gathering, analysing and interpreting performance data. Meanwhile it requires more medical staff to contribute in BSC development, which will result in greater reliability of the BSC.", "title": "" }, { "docid": "7e0b9941d5019927fce0a1223a88d6b5", "text": "Representation and recognition of events in a video is important for a number of tasks such as video surveillance, video browsing and content based video indexing. This paper describes the results of a \"Challenge Project on Video Event Taxonomy\" sponsored by the Advanced Research and Development Activity (ARDA) of the U.S. Government in the summer and fall of 2003. The project brought together more than 30 researchers in computer vision and knowledge representation and representatives of the user community. It resulted in the development of a formal language for describing an ontology of events, which we call VERL (Video Event Representation Language) and a companion language called VEML (Video Event Markup Language) to annotate instances of the events described in VERL. This paper provides a summary of VERL and VEML as well as the considerations associated with the specific design choices.", "title": "" }, { "docid": "799ccd75d6781e38cf5e2faee5784cae", "text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. 
In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.", "title": "" }, { "docid": "d3f97e0de15ab18296e161e287890e18", "text": "Nosocomial or hospital acquired infections threaten the survival and neurodevelopmental outcomes of infants admitted to the neonatal intensive care unit, and increase cost of care. Premature infants are particularly vulnerable since they often undergo invasive procedures and are dependent on central catheters to deliver nutrition and on ventilators for respiratory support. Prevention of nosocomial infection is a critical patient safety imperative, and invariably requires a multidisciplinary approach. There are no short cuts. Hand hygiene before and after patient contact is the most important measure, and yet, compliance with this simple measure can be unsatisfactory. Alcohol based hand sanitizer is effective against many microorganisms and is efficient, compared to plain or antiseptic containing soaps. The use of maternal breast milk is another inexpensive and simple measure to reduce infection rates. Efforts to replicate the anti-infectious properties of maternal breast milk by the use of probiotics, prebiotics, and synbiotics have met with variable success, and there are ongoing trials of lactoferrin, an iron binding whey protein present in large quantities in colostrum. Attempts to boost the immunoglobulin levels of preterm infants with exogenous immunoglobulins have not been shown to reduce nosocomial infections significantly. Over the last decade, improvements in the incidence of catheter-related infections have been achieved, with meticulous attention to every detail from insertion to maintenance, with some centers reporting zero rates for such infections. Other nosocomial infections like ventilator acquired pneumonia and staphylococcus aureus infection remain problematic, and outbreaks with multidrug resistant organisms continue to have disastrous consequences. Management of infections is based on the profile of microorganisms in the neonatal unit and community and targeted therapy is required to control the disease without leading to the development of more resistant strains.", "title": "" }, { "docid": "3dd8c177ae928f7ccad2aa980bd8c747", "text": "The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. 
The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.", "title": "" }, { "docid": "03dc797bafa51245791de2b7c663a305", "text": "In many applications of computational geometry to modeling objects and processes in the physical world, the participating objects are in a state of continuous change. Motion is the most ubiquitous kind of continuous transformation but others, such as shape deformation, are also possible. In a recent paper, Basch, Guibas, and Hershberger [BGH97] proposed the framework of kinetic data structures (KDSs) as a way to maintain, in a completely on-line fashion, desirable information about the state of a geometric system in continuous motion or change. They gave examples of kinetic data structures for the maximum of a set of (changing) numbers, and for the convex hull and closest pair of a set of (moving) points in the plane. The KDS framework allows each object to change its motion at will according to interactions with other moving objects, the environment, etc. We implemented the KDSs described in [BGH97], as well as some alternative methods serving the same purpose, as a way to validate the kinetic data structures framework in practice. In this note, we report some preliminary results on the maintenance of the convex hull, describe the experimental setup, compare three alternative methods, discuss the value of the measures of quality for KDSs proposed by [BGH97], and highlight some important numerical issues.", "title": "" }, { "docid": "d8143c0b083defa15182e079b23bdfe8", "text": "OBJECTIVES\nThe purpose of this study was to compare the incidence of genital injury following penile-vaginal penetration with and without consent.\n\n\nDESIGN\nThis study compared observations of genital injuries from two cohorts.\n\n\nSETTING\nParticipants were drawn from St. Mary's Sexual Assault Referral Centre and a general practice surgery in Manchester, and a general practice surgery in Buckinghamshire.\n\n\nPARTICIPANTS\nTwo cohorts were recruited: a retrospective cohort of 500 complainants referred to a specialist Sexual Assault Referral Centre (the Cases) and 68 women recruited at the time of their routine cervical smear test who had recently had sexual intercourse (the Comparison group).\n\n\nMAIN OUTCOME MEASURES\nPresence of genital injuries.\n\n\nRESULTS\n22.8% (n=500, 95% CI 19.2-26.7) of adult complainants of penile-vaginal rape by a single assailant sustained an injury to the genitalia that was visible within 48h of the incident. This was approximately three times more than the 5.9% (n=68, 95% CI 1.6-14.4) of women who sustained a genital injury during consensual sex. This was a statistically significant difference (α<0.05, p=0.0007). 
Factors such as hormonal status, position during intercourse, criminal justice outcome, relationship to assailant, and the locations, sizes and types of injuries were also considered but the only factor associated with injury was the relationship with the complainant, with an increased risk of injury if the assailant was known to the complainant (p=0.019).\n\n\nCONCLUSIONS\nMost complainants of rape (n=500, 77%, 95% CI 73-81%) will not sustain any genital injury, although women are three times more likely to sustain a genital injury from an assault than consensual intercourse.", "title": "" }, { "docid": "1add7dcbe4f7c666e0453d5fa6661b31", "text": "Convolutive blind source separation (CBSS) that exploits the sparsity of source signals in the frequency domain is addressed in this paper. We assume the sources follow complex Laplacian-like distribution for complex random variable, in which the real part and imaginary part of complex-valued source signals are not necessarily independent. Based on the maximum a posteriori (MAP) criterion, we propose a novel natural gradient method for complex sparse representation. Moreover, a new CBSS method is further developed based on complex sparse representation. The developed CBSS algorithm works in the frequency domain. Here, we assume that the source signals are sufficiently sparse in the frequency domain. If the sources are sufficiently sparse in the frequency domain and the filter length of mixing channels is relatively small and can be estimated, we can even achieve underdetermined CBSS. We illustrate the validity and performance of the proposed learning algorithm by several simulation examples.", "title": "" }, { "docid": "890a2092f3f55799e9c0216dac3d9e2f", "text": "The rise in popularity of permissioned blockchain platforms in recent time is significant. Hyperledger Fabric is one such permissioned blockchain platform and one of the Hyperledger projects hosted by the Linux Foundation. The Fabric comprises various components such as smart-contracts, endorsers, committers, validators, and orderers. As the performance of blockchain platform is a major concern for enterprise applications, in this work, we perform a comprehensive empirical study to characterize the performance of Hyperledger Fabric and identify potential performance bottlenecks to gain a better understanding of the system. We follow a two-phased approach. In the first phase, our goal is to understand the impact of various configuration parameters such as block size, endorsement policy, channels, resource allocation, state database choice on the transaction throughput & latency to provide various guidelines on configuring these parameters. In addition, we also aim to identify performance bottlenecks and hotspots. We observed that (1) endorsement policy verification, (2) sequential policy validation of transactions in a block, and (3) state validation and commit (with CouchDB) were the three major bottlenecks. In the second phase, we focus on optimizing Hyperledger Fabric v1.0 based on our observations. We introduced and studied various simple optimizations such as aggressive caching for endorsement policy verification in the cryptography component (3x improvement in the performance) and parallelizing endorsement policy verification (7x improvement). Further, we enhanced and measured the effect of an existing bulk read/write optimization for CouchDB during state validation & commit phase (2.5x improvement). 
By combining all three optimizations1, we improved the overall throughput by 16x (i.e., from 140 tps to 2250 tps).", "title": "" }, { "docid": "fe903498e0c3345d7e5ebc8bf3407c2f", "text": "This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.", "title": "" }, { "docid": "de0761b7a43cafe7f30d6f8e518dd031", "text": "The Internet of Things (IOT) has been denoted as a new wave of information and communication technology (ICT) advancements. The IOT is a multidisciplinary concept that encompasses a wide range of several technologies, application domains, device capabilities, and operational strategies, etc. The ongoing IOT research activities are directed towards the definition and design of standards and open architectures which is still have the issues requiring a global consensus before the final deployment. This paper gives over view about IOT technologies and applications related to agriculture with comparison of other survey papers and proposed a novel irrigation management system. Our main objective of this work is to for Farming where various new technologies to yield higher growth of the crops and their water supply. Automated control features with latest electronic technology using microcontroller which turns the pumping motor ON and OFF on detecting the dampness content of the earth and GSM phone line is proposed after measuring the temperature, humidity, and soil moisture.", "title": "" }, { "docid": "ef08ef786fd759b33a7d323c69be19db", "text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.", "title": "" }, { "docid": "d9950f75380758d0a0f4fd9d6e885dfd", "text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen, that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. 
However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.", "title": "" }, { "docid": "4c3d8c30223ef63b54f8c7ba3bd061ed", "text": "There is much recent work on using the digital footprints left by people on social media to predict personal traits and gain a deeper understanding of individuals. Due to the veracity of social media, imperfections in prediction algorithms, and the sensitive nature of one's personal traits, much research is still needed to better understand the effectiveness of this line of work, including users' preferences of sharing their computationally derived traits. In this paper, we report a two- part study involving 256 participants, which (1) examines the feasibility and effectiveness of automatically deriving three types of personality traits from Twitter, including Big 5 personality, basic human values, and fundamental needs, and (2) investigates users' opinions of using and sharing these traits. Our findings show there is a potential feasibility of automatically deriving one's personality traits from social media with various factors impacting the accuracy of models. The results also indicate over 61.5% users are willing to share their derived traits in the workplace and that a number of factors significantly influence their sharing preferences. Since our findings demonstrate the feasibility of automatically inferring a user's personal traits from social media, we discuss their implications for designing a new generation of privacy-preserving, hyper-personalized systems.", "title": "" }, { "docid": "b5214fd5f8f8849a57d453b47f1d73f0", "text": "The development of Graphical User Interface (GUI) is meant to significantly increase the ease of usability of software applications so that the can be used by users from different backgrounds and knowledge level. Such a development becomes even more important and challenging when the users are those that have limited literacy capabilities. Although the progress of development for standard software interface has increased significantly, similar progress has not been available in interface for illiterate people. To fill this gap, this paper presents our research on developing interface of software application devoted to illiterate people. In particular, the proposed interface was designed for mobile application and combines graphic design and linguistic approaches. 
With such feature, the developed interface is expected to provide easy to use application for illiterate people.", "title": "" }, { "docid": "6c9d84ced9dd23cdb7542a50f1459fef", "text": "This article outlines a framework for the analysis of economic integration and its relation to the asymmetries of economic and social development. Consciously breaking with state-centric forms of social science, it argues for a research agenda that is more adequate to the exigencies and consequences of globalisation than has traditionally been the case in 'development studies'. Drawing on earlier attempts to analyse the crossborder activities of firms, their spatial configurations and developmental consequences, the article moves beyond these by proposing the framework of the 'global production network' (GPN). It explores the conceptual elements involved in this framework in some detail and then turns to sketch a stylised example of a GPN. The article concludes with a brief indication of the benefits that could be delivered be research informed by GPN analysis.", "title": "" }, { "docid": "98cd53e6bf758a382653cb7252169d22", "text": "We introduce a novel malware detection algorithm based on the analysis of graphs constructed from dynamically collected instruction traces of the target executable. These graphs represent Markov chains, where the vertices are the instructions and the transition probabilities are estimated by the data contained in the trace. We use a combination of graph kernels to create a similarity matrix between the instruction trace graphs. The resulting graph kernel measures similarity between graphs on both local and global levels. Finally, the similarity matrix is sent to a support vector machine to perform classification. Our method is particularly appealing because we do not base our classifications on the raw n-gram data, but rather use our data representation to perform classification in graph space. We demonstrate the performance of our algorithm on two classification problems: benign software versus malware, and the Netbull virus with different packers versus other classes of viruses. Our results show a statistically significant improvement over signature-based and other machine learning-based detection methods.", "title": "" }, { "docid": "6927647b1e1f6bf9bcf65db50e9f8d6e", "text": "Six of the ten leading causes of death in the United States can be directly linked to diet. Measuring accurate dietary intake, the process of determining what someone eats is considered to be an open research problem in the nutrition and health fields. We are developing image-based tools in order to automatically obtain accurate estimates of what foods a user consumes. We have developed a novel food record application using the embedded camera in a mobile device. This paper describes the current status of food image analysis and overviews problems that still need to be addressed.", "title": "" }, { "docid": "81b5379abf3849e1ae4e233fd4955062", "text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. 
Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.", "title": "" }, { "docid": "c11b77f1392c79f4a03f9633c8f97f4d", "text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.", "title": "" } ]
scidocsrr
58a83c37bf4e499e68fdc64b63f2f55c
Online travel reviews as persuasive communication: The effects of content type, source, and certification logos on consumer behavior
[ { "docid": "032f5b66ae4ede7e26a911c9d4885b98", "text": "Are trust and risk important in consumers' electronic commerce purchasing decisions? What are the antecedents of trust and risk in this context? How do trust and risk affect an Internet consumer's purchasing decision? To answer these questions, we i) develop a theoretical framework describing the trust-based decision-making process a consumer uses when making a purchase from a given site, ii) test the proposed model using a Structural Equation Modeling technique on Internet consumer purchasing behavior data collected via a Web survey, and iii) consider the implications of the model. The results of the study show that Internet consumers' trust and perceived risk have strong impacts on their purchasing decisions. Consumer disposition to trust, reputation, privacy concerns, security concerns, the information quality of the Website, and the company's reputation, have strong effects on Internet consumers' trust in the Website. Interestingly, the presence of a third-party seal did not strongly influence consumers' trust. © 2007 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "99ea14010fe3acd37952fb355a25b71c", "text": "Today, as the increasing the amount of using internet, there are so most information interchanges are performed in that internet. So, the methods used as intrusion detective tools for protecting network systems against diverse attacks are became too important. The available of IDS are getting more powerful. Support Vector Machine was used as the classical pattern reorganization tools have been widely used for Intruder detections. There have some different characteristic of features in building an Intrusion Detection System. Conventional SVM do not concern about that. Our enhanced SVM Model proposed with an Recursive Feature Elimination (RFE) and kNearest Neighbor (KNN) method to perform a feature ranking and selection task of the new model. RFE can reduce redundant & recursive features and KNN can select more precisely than the conventional SVM. Experiments and comparisons are conducted through intrusion dataset: the KDD Cup 1999 dataset.", "title": "" }, { "docid": "3332bf8d62c1176b8f5f0aa2bb045d24", "text": "BACKGROUND\nInfectious mononucleosis caused by the Epstein-Barr virus has been associated with increased risk of multiple sclerosis. However, little is known about the characteristics of this association.\n\n\nOBJECTIVE\nTo assess the significance of sex, age at and time since infectious mononucleosis, and attained age to the risk of developing multiple sclerosis after infectious mononucleosis.\n\n\nDESIGN\nCohort study using persons tested serologically for infectious mononucleosis at Statens Serum Institut, the Danish Civil Registration System, the Danish National Hospital Discharge Register, and the Danish Multiple Sclerosis Registry.\n\n\nSETTING\nStatens Serum Institut.\n\n\nPATIENTS\nA cohort of 25 234 Danish patients with mononucleosis was followed up for the occurrence of multiple sclerosis beginning on April 1, 1968, or January 1 of the year after the diagnosis of mononucleosis or after a negative Paul-Bunnell test result, respectively, whichever came later and ending on the date of multiple sclerosis diagnosis, death, emigration, or December 31, 1996, whichever came first.\n\n\nMAIN OUTCOME MEASURE\nThe ratio of observed to expected multiple sclerosis cases in the cohort (standardized incidence ratio).\n\n\nRESULTS\nA total of 104 cases of multiple sclerosis were observed during 556,703 person-years of follow-up, corresponding to a standardized incidence ratio of 2.27 (95% confidence interval, 1.87-2.75). The risk of multiple sclerosis was persistently increased for more than 30 years after infectious mononucleosis and uniformly distributed across all investigated strata of sex and age. The relative risk of multiple sclerosis did not vary by presumed severity of infectious mononucleosis.\n\n\nCONCLUSIONS\nThe risk of multiple sclerosis is increased in persons with prior infectious mononucleosis, regardless of sex, age, and time since infectious mononucleosis or severity of infection. The risk of multiple sclerosis may be increased soon after infectious mononucleosis and persists for at least 30 years after the infection.", "title": "" }, { "docid": "9609d87c2e75b452495e7fb779a94027", "text": "Cyclophosphamide (CYC) has been the backbone immunosuppressive drug to achieve sustained remission in lupus nephritis (LN). The aim was to evaluate the efficacy and compare adverse effects of low and high dose intravenous CYC therapy in Indian patients with proliferative lupus nephritis. 
An open-label, parallel group, randomized controlled trial involving 75 patients with class III/IV LN was conducted after obtaining informed consent. The low dose group (n = 38) received 6 × 500 mg CYC fortnightly and high dose group (n = 37) received 6 × 750 mg/m2 CYC four-weekly followed by azathioprine. The primary outcome was complete/partial/no response at 52 weeks. The secondary outcomes were renal and non-renal flares and adverse events. Intention-to-treat analyses were performed. At 52 weeks, 27 (73%) in high dose group achieved complete/partial response (CR/PR) vs 19 (50%) in low dose (p = 0.04). CR was higher in the high dose vs low dose [24 (65%) vs 17 (44%)], although not statistically significant. Non-responders (NR) in the high dose group were also significantly lower 10 (27%) vs low dose 19 (50%) (p = 0.04). The change in the SLEDAI (Median, IQR) was also higher in the high dose 16 (7–20) in contrast to the low dose 10 (5.5–14) (p = 0.04). There was significant alopecia and CYC-induced leucopenia in high dose group. Renal relapses were significantly higher in the low dose group vs high dose [9 (24%) vs 1(3%), (p = 0.01)]. At 52 weeks, high dose CYC was more effective in inducing remission with decreased renal relapses in our population. Trial Registration: The study was registered at http://www.clintrials.gov. NCT02645565.", "title": "" }, { "docid": "a18e6f80284a96f680fb00cb3f0cc692", "text": "We demonstrate an 8-layer 3D Vertical Gate NAND Flash with WL half pitch =37.5nm, BL half pitch=75nm, 64-WL NAND string with 63% array core efficiency. This is the first time that a 3D NAND Flash can be successfully scaled to below 3Xnm half pitch in one lateral dimension, thus an 8-layer stack device already provides a very cost effective technology with lower cost than the conventional sub-20nm 2D NAND. Our new VG architecture has two key features: (1) To improve the manufacturability a new layout that twists the even/odd BL's (and pages) in the opposite direction (split-page BL) is adopted. This allows the island-gate SSL devices [1] and metal interconnections be laid out in double pitch, creating much larger process window for BL pitch scaling; (2) A novel staircase BL contact formation method using binary sum of only M lithography and etching steps to achieve 2M contacts. This not only allows precise landing of the tight-pitch staircase contacts, but also minimizes the process steps and cost. We have successfully fabricated an 8-layer array using TFT BE-SONOS charge-trapping device. The array characteristics including reading, programming, inhibit, and block erase are demonstrated.", "title": "" }, { "docid": "c1713b817c4b2ce6e134b6e0510a961f", "text": "BACKGROUND\nEntity recognition is one of the most primary steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machine and conditional random field, have been deployed to recognize entities from clinical texts in the past few years. 
In recent years, recurrent neural network (RNN), one of deep learning methods that has shown great potential on many problems including named entity recognition, also has been gradually used for entity recognition from clinical texts.\n\n\nMETHODS\nIn this paper, we comprehensively investigate the performance of LSTM (long-short term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: input layer - generates representation of each word of a sentence; LSTM layer - outputs another word representation sequence that captures the context information of each word in this sentence; Inference layer - makes tagging decisions according to the output of LSTM layer, that is, outputting a label sequence.\n\n\nRESULTS\nExperiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves highest micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which is considerably competitive with other state-of-the-art systems.\n\n\nCONCLUSIONS\nLSTM that requires no hand-crafted feature has great potential on entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge bases widely existing in the clinical domain into LSTM, which is a case of our future work. Moreover, how to use LSTM to recognize entities in specific formats is also another possible future direction.", "title": "" }, { "docid": "64bd2fc0d1b41574046340833144dabe", "text": "Probe-based confocal laser endomicroscopy (pCLE) provides high-resolution in vivo imaging for intraoperative tissue characterization. Maintaining a desired contact force between target tissue and the pCLE probe is important for image consistency, allowing large area surveillance to be performed. A hand-held instrument that can provide a predetermined contact force to obtain consistent images has been developed. The main components of the instrument include a linear voice coil actuator, a donut load-cell, and a pCLE probe. In this paper, detailed mechanical design of the instrument is presented and system level modeling of closed-loop force control of the actuator is provided. The performance of the instrument has been evaluated in bench tests as well as in hand-held experiments. Results demonstrate that the instrument ensures a consistent predetermined contact force between pCLE probe tip and tissue. Furthermore, it compensates for both simulated physiological movement of the tissue and involuntary movements of the operator's hand. Using pCLE video feature tracking of large colonic crypts within the mucosal surface, the steadiness of the tissue images obtained using the instrument force control is demonstrated by confirming minimal crypt translation.", "title": "" }, { "docid": "8318d49318f442749bfe3a33a3394f42", "text": "Driving Scene understanding is a key ingredient for intelligent transportation systems. To achieve systems that can operate in a complex physical and social environment, they need to understand and learn how humans drive and interact with traffic scenes. We present the Honda Research Institute Driving Dataset (HDD), a challenging dataset to enable research on learning driver behavior in real-life environments. 
The dataset includes 104 hours of real human driving in the San Francisco Bay Area collected using an instrumented vehicle equipped with different sensors. We provide a detailed analysis of HDD with a comparison to other driving datasets. A novel annotation methodology is introduced to enable research on driver behavior understanding from untrimmed data sequences. As the first step, baseline algorithms for driver behavior detection are trained and tested to demonstrate the feasibility of the proposed task.", "title": "" }, { "docid": "a11ed66e5368060be9585022db65c2ad", "text": "This article provides a historical context of evolutionary psychology and feminism, and evaluates the contributions to this special issue of Sex Roles within that context. We briefly outline the basic tenets of evolutionary psychology and articulate its meta-theory of the origins of gender similarities and differences. The article then evaluates the specific contributions: Sexual Strategies Theory and the desire for sexual variety; evolved standards of beauty; hypothesized adaptations to ovulation; the appeal of risk taking in human mating; understanding the causes of sexual victimization; and the role of studies of lesbian mate preferences in evaluating the framework of evolutionary psychology. Discussion focuses on the importance of social and cultural context, human behavioral flexibility, and the evidentiary status of specific evolutionary psychological hypotheses. We conclude by examining the potential role of evolutionary psychology in addressing social problems identified by feminist agendas.", "title": "" }, { "docid": "066fdb2deeca1d13218f16ad35fe5f86", "text": "As manga (Japanese comics) have become common content in many countries, it is necessary to search manga by text query or translate them automatically. For these applications, we must first extract texts from manga. In this paper, we develop a method to detect text regions in manga. Taking motivation from methods used in scene text detection, we propose an approach using classifiers for both connected components and regions. We have also developed a text region dataset of manga, which enables learning and detailed evaluations of methods used to detect text regions. Experiments using the dataset showed that our text detection method performs more effectively than existing methods.", "title": "" }, { "docid": "bd06f693359bba90de59454f32581c9c", "text": "Digital business ecosystems are becoming an increasingly popular concept as an open environment for modeling and building interoperable system integration. Business organizations have realized the importance of using standards as a cost-effective method for accelerating business process integration. Small and medium size enterprise (SME) participation in global trade is increasing, however, digital transactions are still at a low level. Cloud integration is expected to offer a cost-effective business model to form an interoperable digital supply chain. By observing the integration models, we can identify the large potential of cloud services to accelerate integration. An industrial case study is conducted. This paper investigates and contributes new knowledge on a how top-down approach by using a digital business ecosystem framework enables business managers to define new user requirements and functionalities for system integration. Through analysis, we identify the current cap of integration design. 
Using the cloud clustering framework, we identify how the design affects cloud integration services.", "title": "" }, { "docid": "59786d8ea951639b8b9a4e60c9d43a06", "text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.", "title": "" }, { "docid": "20d754528009ebce458eaa748312b2fe", "text": "This poster provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, using Markov Decision Processes (MDP), which are used for the imitation learning problem where an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert policy whereas in the IRL framework, the agent tries to learn a reward which can explain the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder if it is worth estimating such a reward, or if estimating a policy is sufficient. This quite natural question has not really been addressed in the literature right now. We provide partial answers, both from a theoretical and empirical point of view.", "title": "" }, { "docid": "2adde1812974f2d5d35d4c7e31ca7247", "text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently "fail-open," meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned. 
", "title": "" }, { "docid": "8caaea6ffb668c019977809773a6d8c5", "text": "In the past several years, a number of different language modeling improvements over simple trigram models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser–Ney smoothing, and clustering. We present explorations of variations on, or of the limits of, each of these techniques, including showing that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We compare a combination of all techniques together to a Katz smoothed trigram model with no count cutoffs. We achieve perplexity reductions between 38 and 50% (1 bit of entropy), depending on training data size, as well as a word error rate reduction of 8.9%. Our perplexity reductions are perhaps the highest reported compared to a fair baseline. © 2001 Academic Press", "title": "" }, { "docid": "23a5d1aebe5e2f7dd5ed8dfde17ce374", "text": "Today's workplace often includes workers from 4 distinct generations, and each generation brings a unique set of core values and characteristics to an organization. These generational differences can produce benefits, such as improved patient care, as well as challenges, such as conflict among employees. This article reviews current research on generational differences in educational settings and the workplace and discusses the implications of these findings for medical imaging and radiation therapy departments.", "title": "" }, { "docid": "b317f33d159bddce908df4aa9ba82cf9", "text": "Point cloud source data for surface reconstruction is usually contaminated with noise and outliers. To overcome this deficiency, a density-based point cloud denoising method is presented to remove outliers and noisy points. First, particle-swarm optimization technique is employed for automatically approximating optimal bandwidth of multivariate kernel density estimation to ensure the robust performance of density estimation. Then, mean-shift based clustering technique is used to remove outliers through a thresholding scheme. After removing outliers from the point cloud, bilateral mesh filtering is applied to smooth the remaining points. The experimental results show that this approach, comparably, is robust and efficient.", "title": "" }, { "docid": "b6ff96922a0b8e32236ba8fb44bf4888", "text": "Most people acknowledge that personal computers have enormously enhanced the autonomy and communication capacity of people with special needs. The key factor for accessibility to these opportunities is the adequate design of the user interface which, consequently, has a high impact on the social lives of users with disabilities. The design of universally accessible interfaces has a positive effect over the socialisation of people with disabilities. People with sensory disabilities can profit from computers as a way of personal direct and remote communication. Personal computers can also assist people with severe motor impairments to manipulate their environment and to enhance their mobility by means of, for example, smart wheelchairs. In this way they can become more socially active and productive. 
Accessible interfaces have become so indispensable for personal autonomy and social inclusion that in several countries special legislation protects people from ‘digital exclusion’. To apply this legislation, inexperienced HCI designers can experience difficulties. They would greatly benefit from inclusive design guidelines in order to be able to implement the ‘design for all’ philosophy. In addition, they need clear criteria to avoid negative social and ethical impact on users. This paper analyses the benefits of the use of inclusive design guidelines in order to facilitate a universal design focus so that social exclusion is avoided. In addition, the need for ethical and social guidelines in order to avoid undesirable side effects for users is discussed. Finally, some preliminary examples of socially and ethically aware guidelines are proposed. 1. HCI and people with disabilities Most people living in developed countries have direct or indirect relationships with computers in diverse ways. In addition, there exist many tasks that could hardly be performed without computers, leading to a dependence on Information Technology. Moreover, people not having access to computers can suffer the effects of the so-called digital divide (Fitch, 2002), a new type of social exclusion. People with disabilities are one of the user groups with higher computer dependence because, for many of them, the computer is the only way to perform several vital tasks, such as personal and remote communication, control of the environment, assisted mobility, access to telematic networks and services, etc. Digital exclusion for disabled people means not having full access to a socially active and independent lifestyle. In this way, Human-Computer Interaction (HCI) is playing an important role in the provision of social opportunities to people with disabilities (Abascal and Civit, 2002). 2. HCI and social integration 2.1. Gaining access to computers Computers provide very effective solutions to help people with disabilities to enhance their social integration. For instance, people with severe speech and motor impairments have serious difficulties to communicate with other people and to perform common operations in their close environment (e.g. to handle objects). For them, computers are incredibly useful as alternative communication devices. Messages can be composed using special keyboards (Lesher et al., 1998), scanning with one or two switches, by means of eye tracking (Sibert and Jacob, 2000), etc. Current software techniques also allow the design of methods to enhance the message composition speed. For instance, Artificial Intelligence methods are frequently used to design word prediction aids to assist in the typing of text with minimum effort (Garay et al., 1997). Computers can also assist the disabled user to autonomously control the environment through wireless communication, to drive smart electric powered wheelchairs, to control assistive robotic arms, etc. What is more, the integration of all of these services allows people with disabilities using the same interface to perform all tasks in a similar way (Abascal and Civit, 2001a). 
This is possible because assistive technologists have devoted much effort to providing disabled people with devices and procedures to enhance or substitute their physical and cognitive functions in order to be able to gain access to computers (Cook and Hussey, 2002). 2.2. Using commercial software When the need of gaining access to a PC is solved, the user faces another problem due to difficulties in using commercial software. Many applications have been designed without taking into account that they can be used by people using Assistive Technology devices, and therefore they may have unnecessary barriers which impede the use of alternative interaction devices. J. Abascal, C. Nicolle / Interacting with Computers 17 (2005) 484–505 486 This is the case for one of the most promising application fields nowadays: the internet. A PC linked to a telematic network opens the door to new remote services that can be crucial for people with disabilities. Services such us tele-teaching, tele-care, tele-working, tele-shopping, etc., may enormously enhance their quality of life. These are just examples of the great interest of gaining access to services provided by means of computers for people with disabilities. However, if these services are not accessible, they are useless for people with disabilities. In addition, even if the services are accessible, that is, the users can actually perform the tasks they wish to, it is also important that users can perform those tasks easily, effectively and efficiently. Usability, therefore, is also a key requirement. 2.3. Social demand for accessibility and usability Two factors, among others, have greatly influenced the social demand for accessible computing. The first factor was the technological revolution produced by the availability of personal computers that became smaller, cheaper, lower in consumption, and easier to use than previous computing machines. In parallel, a social revolution has evolved as a result of the battle against social exclusion ever since disabled people became conscious of their rights and needs. The conjunction of computer technology in the form of inexpensive and powerful personal computers, with the struggle of people with disabilities towards autonomous life and social integration, produced the starting point of a new technological challenge. This trend has been also supported in some countries by laws that prevent technological exclusion of people with disabilities and favour the inclusive use of technology (e.g. the Americans with Disabilities Act in the United States and the Disability Discrimination Act in the United Kingdom). The next sections discuss how this situation influenced the design of user interfaces for people with disabilities. 3. User interfaces for people with disabilities With the popularity of personal computers many technicians realised that they could become an indispensable tool to assist people with disabilities for most necessary tasks. They soon discovered that a key issue was the availability of suitable user interfaces, due to the special requirements of these users. But the variety of needs and the wide diversity of physical, sensory and cognitive characteristics make the design of interfaces very complex. An interesting process has occurred whereby we have moved from a computer ‘patchwork’ situation to the adoption of more structured HCI methodologies. 
In the next sections, this process is briefly described, highlighting issues that can and should lead to inclusive design guidelines for socially and ethically aware HCI. 1 Americans with Disabilities Act (ADA). Available at http://www.usdoj.gov/crt/ada/adahom1.htm, last accessed January 15, 2005. 2 Disabilty Discrimination Act (DDA). Available at http://www.disability.gov.uk/dda/index.html, last accessed January 15, 2005. J. Abascal, C. Nicolle / Interacting with Computers 17 (2005) 484–505 487 3.1. First approach: adaptation of existing systems For years, the main activity of people working in Assistive Technology was the adaptation of commercially available computers to the capabilities of users with disabilities. Existing computer interaction style was mainly based on a standard keyboard and mouse for input, and output was based on a screen for data, a printer for hard copy, and a ‘bell’ for some warnings and signals. This kind of interface takes for granted the fact that users have the following physical skills: enough sight capacity to read the screen, movement control and strength in the hands to handle the standard keyboard, coordination for mouse use, and also hearing capacity for audible warnings. In addition, cognitive capabilities to read, understand, reason, etc., were also assumed. When one or more of these skills were lacking, conscientious designers would try to substitute them by another capability, or an alternative way of communication. For instance, blind users could hear the content of the screen when it was read aloud by a textto-voice translator. Alternatively, output could be directed to a Braille printer, or matrix of pins. Thus, adaptation was done in the following way: first, detecting the barriers to gain access to the computer by a user or a group of users, and then, providing them with an alternative way based on the abilities and skills present in this group of users. This procedure often succeeded, producing very useful alternative ways to use computers. Nevertheless, some drawbacks were detected: † Lack of generality: the smaller the group of users the design is focused on, the better results were obtained. Therefore, different systems had to be designed to fit the needs of us", "title": "" }, { "docid": "d72092cd909d88e18598925024dc6b97", "text": "This paper focuses on the robust dissipative fault-tolerant control problem for one kind of Takagi-Sugeno (T-S) fuzzy descriptor system with actuator failures. The solvable conditions of the robust dissipative fault-tolerant controller are given by using of the Lyapunov theory, Lagrange interpolation polynomial theory, etc. These solvable conditions not only make the closed loop system dissipative, but also integral for the actuator failure situation. The dissipative fault-tolerant controller design methods are given by the aid of the linear matrix inequality toolbox, the function of randomly generated matrix, loop statement, and numerical solution, etc. Thus, simulation process is fully intelligent and efficient. At the same time, the design methods are also obtained for the passive and H∞ fault-tolerant controllers. This explains the fact that the dissipative control unifies H∞ control and passive control. Finally, we give example that illustrates our results.", "title": "" }, { "docid": "446a7404a0e4e78156532fcb93270475", "text": "Convolutional Neural Networks (CNNs) can provide accurate object classification. 
They can be extended to perform object detection by iterating over dense or selected proposed object regions. However, the runtime of such detectors scales as the total number and/or area of regions to examine per image, and training such detectors may be prohibitively slow. However, for some CNN classifier topologies, it is possible to share significant work among overlapping regions to be classified. This paper presents DenseNet, an open source system that computes dense, multiscale features from the convolutional layers of a CNN based object classifier. Future work will involve training efficient object detectors with DenseNet feature descriptors.", "title": "" }, { "docid": "14f539b7c27aeb96025045a660416e39", "text": "This paper describes a method for the automatic self-calibration of a 3D Laser sensor. We wish to acquire crisp point clouds and so we adopt a measure of crispness to capture point cloud quality. We then pose the calibration problem as the task of maximising point cloud quality. Concretely, we use Rényi Quadratic Entropy to measure the degree of organisation of a point cloud. By expressing this quantity as a function of key unknown system parameters, we are able to deduce a full calibration of the sensor via an online optimisation. Beyond details on the sensor design itself, we fully describe the end-to-end intrinsic parameter calibration process and the estimation of the clock skews between the constituent microprocessors. We analyse performance using real and simulated data and demonstrate robust performance over thirty test sites.", "title": "" } ]
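The last passage above describes self-calibration of a 3D laser sensor by maximising point-cloud "crispness", measured with Rényi Quadratic Entropy. As a hedged illustration only (not the authors' code), the sketch below computes that entropy for a point cloud under a Gaussian kernel density estimate with an assumed isotropic bandwidth sigma; a calibration search would then minimise this value over the unknown sensor parameters.

```python
# Minimal sketch (not the paper's implementation): Renyi Quadratic Entropy of a
# point cloud under a Gaussian kernel density estimate. A lower value means a
# "crisper" cloud, so a calibration routine would minimise it over the unknown
# intrinsic parameters. The bandwidth sigma is an assumed free parameter.
import numpy as np

def renyi_quadratic_entropy(points: np.ndarray, sigma: float = 0.05) -> float:
    """points: (N, d) array of points; sigma: kernel bandwidth (e.g. metres)."""
    n, d = points.shape
    # Pairwise squared distances between all points (O(N^2) memory, fine for a sketch).
    diff = points[:, None, :] - points[None, :, :]
    sq_dist = np.sum(diff ** 2, axis=-1)
    # Information potential: (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2*I).
    norm = (4.0 * np.pi * sigma ** 2) ** (d / 2.0)
    information_potential = np.sum(np.exp(-sq_dist / (4.0 * sigma ** 2))) / (norm * n ** 2)
    return -np.log(information_potential)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    crisp = rng.normal(scale=0.01, size=(500, 3))                # tightly clustered surface points
    blurred = crisp + rng.normal(scale=0.05, size=crisp.shape)   # a mis-calibrated, smeared copy
    print(renyi_quadratic_entropy(crisp), renyi_quadratic_entropy(blurred))
```

The toy usage prints a lower entropy for the tight cluster than for its blurred copy, which is the property the calibration objective relies on.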
scidocsrr
2c6848e03b871a46c9228a2951dc7f4f
Analysis of Social Networks Using the Techniques of Web Mining
[ { "docid": "bed9bdf4d4965610b85378f2fdbfab2a", "text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.", "title": "" } ]
[ { "docid": "ed9f79cab2dfa271ee436b7d6884bc13", "text": "This study conducts a phylogenetic analysis of extant African papionin craniodental morphology, including both quantitative and qualitative characters. We use two different methods to control for allometry: the previously described narrow allometric coding method, and the general allometric coding method, introduced herein. The results of this study strongly suggest that African papionin phylogeny based on molecular systematics, and that based on morphology, are congruent and support a Cercocebus/Mandrillus clade as well as a Papio/Lophocebus/Theropithecus clade. In contrast to previous claims regarding papionin and, more broadly, primate craniodental data, this study finds that such data are a source of valuable phylogenetic information and removes the basis for considering hard tissue anatomy \"unreliable\" in phylogeny reconstruction. Among highly sexually dimorphic primates such as papionins, male morphologies appear to be particularly good sources of phylogenetic information. In addition, we argue that the male and female morphotypes should be analyzed separately and then added together in a concatenated matrix in future studies of sexually dimorphic taxa. Character transformation analyses identify a series of synapomorphies uniting the various papionin clades that, given a sufficient sample size, should potentially be useful in future morphological analyses, especially those involving fossil taxa.", "title": "" }, { "docid": "6614eeffe9fb332a028b1e80aa24016a", "text": "Advances in microelectronics, array processing, and wireless networking, have motivated the analysis and design of low-cost integrated sensing, computating, and communicating nodes capable of performing various demanding collaborative space-time processing tasks. In this paper, we consider the problem of coherent acoustic sensor array processing and localization on distributed wireless sensor networks. We first introduce some basic concepts of beamforming and localization for wideband acoustic sources. A review of various known localization algorithms based on time-delay followed by LS estimations as well as maximum likelihood method is given. Issues related to practical implementation of coherent array processing including the need for fine-grain time synchronization are discussed. Then we describe the implementation of a Linux-based wireless networked acoustic sensor array testbed, utilizing commercially available iPAQs with built in microphones, codecs, and microprocessors, plus wireless Ethernet cards, to perform acoustic source localization. Various field-measured results using two localization algorithms show the effectiveness of the proposed testbed. An extensive list of references related to this work is also included. Keywords— Beamforming, Source Localization, Distributed Sensor Network, Wireless Network, Ad Hoc Network, Microphone Array, Time Synchronization.", "title": "" }, { "docid": "805583da675c068b7cc2bca80e918963", "text": "Designing an actuator system for highly dynamic legged robots has been one of the grand challenges in robotics research. Conventional actuators for manufacturing applications have difficulty satisfying design requirements for high-speed locomotion, such as the need for high torque density and the ability to manage dynamic physical interactions. To address this challenge, this paper suggests a proprioceptive actuation paradigm that enables highly dynamic performance in legged machines. 
Proprioceptive actuation uses collocated force control at the joints to effectively control contact interactions at the feet under dynamic conditions. Modal analysis of a reduced leg model and dimensional analysis of DC motors address the main principles for implementation of this paradigm. In the realm of legged machines, this paradigm provides a unique combination of high torque density, high-bandwidth force control, and the ability to mitigate impacts through backdrivability. We introduce a new metric named the “impact mitigation factor” (IMF) to quantify backdrivability at impact, which enables design comparison across a wide class of robots. The MIT Cheetah leg is presented, and is shown to have an IMF that is comparable to other quadrupeds with series springs to handle impact. The design enables the Cheetah to control contact forces during dynamic bounding, with contact times down to 85 ms and peak forces over 450 N. The unique capabilities of the MIT Cheetah, achieving impact-robust force-controlled operation in high-speed three-dimensional running and jumping, suggest wider implementation of this holistic actuation approach.", "title": "" }, { "docid": "c2b41a637cdc46abf0e154368a5990df", "text": "Ideally, the time that an incremental algorithm uses to process a change should be a function of the size of the change rather than, say, the size of the entire current input. Based on a formalization of \"the set of things changed\" by an incremental modification, this paper investigates how and to what extent it is possible to give such a guarantee for a chart-based parsing framework and discusses the general utility of a minimality notion in incremental processing.", "title": "" }, { "docid": "cd1a5d05e1991accd0a733ae0f2b7afc", "text": "This paper presents the application of an embedded camera system for detecting laser spot in the shooting simulator. The proposed shooting simulator uses a specific target box, where the circular pattern target is mounted. The embedded camera is installed inside the box to capture the circular pattern target and laser spot image. To localize the circular pattern automatically, two colored solid circles are painted on the target. This technique allows the simple and fast color tracking to track the colored objects for localizing the circular pattern. The CMUCam4 is employed as the embedded camera. It is able to localize the target and detect the laser spot in real-time at 30 fps. From the experimental results, the errors in calculating shooting score and detecting laser spot are 3.82% and 0.68% respectively. Further the proposed system provides the more accurate scoring system in real number compared to the conventional integer number.", "title": "" }, { "docid": "691f5f53582ceedaa51812307778b4db", "text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. 
Implementing a vulnerability management process 2 Tom Palmaers", "title": "" }, { "docid": "867516a6a54105e4759338e407bafa5a", "text": "At the end of the criminal intelligence analysis process there are relatively well established and understood approaches to explicit externalisation and representation of thought that include theories of argumentation, narrative and hybrid approaches that include both of these. However the focus of this paper is on the little understood area of how to support users in the process of arriving at such representations from an initial starting point where little is given. The work is based on theoretical considerations and some initial studies with end users. In focusing on process we discuss the requirements of fluidity and rigor and how to gain traction in investigations, the processes of thinking involved including abductive, deductive and inductive reasoning, how users may use thematic sorting in early stages of investigation and how tactile reasoning may be used to externalize and facilitate reasoning in a productive way. In the conclusion section we discuss the issues raised in this work and directions for future work.", "title": "" }, { "docid": "0cd42818f21ada2a8a6c2ed7a0f078fe", "text": "In perceiving objects we may synthesize conjunctions of separable features by directing attention serially to each item in turn (A. Treisman and G. Gelade, Cognitive Psychology, 1980, 12, 97136). This feature-integration theory predicts that when attention is diverted or overloaded, features may be wrongly recombined, giving rise to “illusory conjunctions.” The present paper confirms that illusory conjunctions are frequently experienced among unattended stimuli varying in color and shape, and that they occur also with size and solidity (outlined versus filled-in shapes). They are shown both in verbal recall and in simultaneous and successive matching tasks, making it unlikely that they depend on verbal labeling or on memory failure. They occur as often between stimuli differing on many features as between more similar stimuli, and spatial separation has little effect on their frequency. Each feature seems to be coded as an independent entity and to migrate, when attention is diverted, with few constraints from the other features of its source or destination.", "title": "" }, { "docid": "1d0d5ad5371a3f7b8e90fad6d5299fa7", "text": "Vascularization of embryonic organs or tumors starts from a primitive lattice of capillaries. Upon perfusion, this lattice is remodeled into branched arteries and veins. Adaptation to mechanical forces is implied to play a major role in arterial patterning. However, numerical simulations of vessel adaptation to haemodynamics has so far failed to predict any realistic vascular pattern. We present in this article a theoretical modeling of vascular development in the yolk sac based on three features of vascular morphogenesis: the disconnection of side branches from main branches, the reconnection of dangling sprouts (\"dead ends\"), and the plastic extension of interstitial tissue, which we have observed in vascular morphogenesis. We show that the effect of Poiseuille flow in the vessels can be modeled by aggregation of random walkers. Solid tissue expansion can be modeled by a Poiseuille (parabolic) deformation, hence by deformation under hits of random walkers. Incorporation of these features, which are of a mechanical nature, leads to realistic modeling of vessels, with important biological consequences. 
The model also predicts the outcome of simple mechanical actions, such as clamping of vessels or deformation of tissue by the presence of obstacles. This study offers an explanation for flow-driven control of vascular branching morphogenesis.", "title": "" }, { "docid": "024e4eebc8cb23d85676df920316f62c", "text": "E-voting technology has been developed for more than 30 years. However it is still distance away from serious application. The major challenges are to provide a secure solution and to gain trust from the voters in using it. In this paper we try to present a comprehensive review to e-voting by looking at these challenges. We summarized the vast amount of security requirements named in the literature that allows researcher to design a secure system. We reviewed some of the e-voting systems found in the real world and the literature. We also studied how a e-voting system can be usable by looking at different usability research conducted on e-voting. Summarizes on different cryptographic tools in constructing e-voting systems are also presented in the paper. We hope this paper can served as a good introduction for e-voting researches.", "title": "" }, { "docid": "22cdfb6170fab44905a8f79b282a1313", "text": "CONTEXT\nInteprofessional collaboration (IPC) between biomedically trained doctors (BMD) and traditional, complementary and alternative medicine practitioners (TCAMP) is an essential element in the development of successful integrative healthcare (IHC) services. This systematic review aims to identify organizational strategies that would facilitate this process.\n\n\nMETHODS\nWe searched 4 international databases for qualitative studies on the theme of BMD-TCAMP IPC, supplemented with a purposive search of 31 health services and TCAM journals. Methodological quality of included studies was assessed using published checklist. Results of each included study were synthesized using a framework approach, with reference to the Structuration Model of Collaboration.\n\n\nFINDINGS\nThirty-seven studies of acceptable quality were included. The main driver for developing integrative healthcare was the demand for holistic care from patients. Integration can best be led by those trained in both paradigms. Bridge-building activities, positive promotion of partnership and co-location of practices are also beneficial for creating bonding between team members. In order to empower the participation of TCAMP, the perceived power differentials need to be reduced. Also, resources should be committed to supporting team building, collaborative initiatives and greater patient access. Leadership and funding from central authorities are needed to promote the use of condition-specific referral protocols and shared electronic health records. More mature IHC programs usually formalize their evaluation process around outcomes that are recognized both by BMD and TCAMP.\n\n\nCONCLUSIONS\nThe major themes emerging from our review suggest that successful collaborative relationships between BMD and TCAMP are similar to those between other health professionals, and interventions which improve the effectiveness of joint working in other healthcare teams with may well be transferable to promote better partnership between the paradigms. 
However, striking a balance between the different practices and preserving the epistemological stance of TCAM will remain the greatest challenge in successful integration.", "title": "" }, { "docid": "b3af820192d34b6066498e04b9a51e31", "text": "Nowadays there are studies in different fields aimed to extract relevant information on trends, challenges and opportunities; all these studies have something in common: they work with large volumes of data. This work analyzes different studies carried out on the use of Machine Learning (ML) for processing large volumes of data (Big Data). Most of these datasets, are complex and come from various sources with structured or unstructured data. For this reason, it is necessary to find mechanisms that allow classification and, in a certain way, organize them to facilitate to the users the extraction of the required information. The processing of these data requires the use of classification techniques that will also be reviewed.", "title": "" }, { "docid": "10b7ce647229f3c9fe5aeced5be85e38", "text": "The proliferation of deep learning methods in natural language processing (NLP) and the large amounts of data they often require stands in stark contrast to the relatively data-poor clinical NLP domain. In particular, large text corpora are necessary to build high-quality word embeddings, yet often large corpora that are suitably representative of the target clinical data are unavailable. This forces a choice between building embeddings from small clinical corpora and less representative, larger corpora. This paper explores this trade-off, as well as intermediate compromise solutions. Two standard clinical NLP tasks (the i2b2 2010 concept and assertion tasks) are evaluated with commonly used deep learning models (recurrent neural networks and convolutional neural networks) using a set of six corpora ranging from the target i2b2 data to large open-domain datasets. While combinations of corpora are generally found to work best, the single-best corpus is generally task-dependent.", "title": "" }, { "docid": "f02bd91e8374506aa4f8a2107f9545e6", "text": "In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationshi ps, we examined how attachment was related to communication technology use within romantic relation ships. Participants reported on their attachment style and frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. Electronic communication channels (phone and texting) were related to positive relationship qualities, however, once accounting for attachment, only moderated effects were found. Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of a SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals’ needs. 2013 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "7bbfafb6de6ccd50a4a708af76588beb", "text": "In this paper we present a system for mobile augmented reality (AR) based on visual recognition. We split the tasks of recognizing an object and tracking it on the user's screen into a server-side and a client-side task, respectively. The capabilities of this hybrid client-server approach are demonstrated with a prototype application on the Android platform, which is able to augment both stationary (landmarks) and non stationary (media covers) objects. The database on the server side consists of hundreds of thousands of landmarks, which is crawled using a state of the art mining method for community photo collections. In addition to the landmark images, we also integrate a database of media covers with millions of items. Retrieval from these databases is done using vocabularies of local visual features. In order to fulfill the real-time constraints for AR applications, we introduce a method to speed-up geometric verification of feature matches. The client-side tracking of recognized objects builds on a multi-modal combination of visual features and sensor measurements. Here, we also introduce a motion estimation method, which is more efficient and precise than similar approaches. To the best of our knowledge this is the first system, which demonstrates a complete pipeline for augmented reality on mobile devices with visual object recognition scaled to millions of objects combined with real-time object tracking.", "title": "" }, { "docid": "30e287e44e66e887ad5d689657e019c3", "text": "OBJECTIVE\nThe purpose of this study was to determine whether the Sensory Profile discriminates between children with and without autism and which items on the profile best discriminate between these groups.\n\n\nMETHOD\nParents of 32 children with autism aged 3 to 13 years and of 64 children without autism aged 3 to 10 years completed the Sensory Profile. A descriptive analysis of the data set of children with autism identified the distribution of responses on each item. A multivariate analysis of covariance (MANCOVA) of each category of the Sensory Profile identified possible differences among subjects without autism, with mild or moderate autism, and with severe autism. Follow-up univariate analyses were conducted for any category that yielded a significant result on the MANCOVA:\n\n\nRESULTS\nEight-four of 99 items (85%) on the Sensory Profile differentiated the sensory processing skills of subjects with autism from those without autism. There were no group differences between subjects with mild or moderate autism and subjects with severe autism.\n\n\nCONCLUSION\nThe Sensory Profile can provide information about the sensory processing skills of children with autism to assist occupational therapists in assessing and planning intervention for these children.", "title": "" }, { "docid": "510439267c11c53b31dcf0b1c40e331b", "text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context.In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). 
The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. This approach is then extended to cover several weaknesses of the adaptation. Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.", "title": "" }, { "docid": "09fc272a6d9ea954727d07075ecd5bfd", "text": "Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.", "title": "" }, { "docid": "63063c0a2b08f068c11da6d80236fa87", "text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS). Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in a LR video. To achieve high-quality reconstruction of HR details for a LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture synthesis based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using the bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.", "title": "" } ]
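One of the passages in the list just closed applies the PROMETHEE and GAIA ranking methods to GIS data. To make the ranking step concrete, here is a minimal, hedged PROMETHEE II sketch: it assumes the simple "usual" preference function for every criterion, whereas the cited work would normally use per-criterion preference functions and the GAIA visual plane, which are not reproduced here.

```python
# Minimal PROMETHEE II sketch (not the paper's implementation): ranks alternatives
# by net outranking flow. Uses the "usual" preference function P(d) = 1 if d > 0
# else 0 for every criterion; real applications typically pick richer per-criterion
# preference functions and thresholds.
import numpy as np

def promethee_ii(scores: np.ndarray, weights: np.ndarray, maximise: np.ndarray) -> np.ndarray:
    """scores: (n_alternatives, n_criteria); weights sum to 1; maximise: bool per criterion.
    Returns the net flow phi for each alternative (higher = better)."""
    x = np.where(maximise, scores, -scores)        # make every criterion "bigger is better"
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]           # diff[a, b, j] = x[a, j] - x[b, j]
    pref = (diff > 0).astype(float)                # usual-criterion preference function
    pi = pref @ weights                            # aggregated preference pi(a, b)
    phi_plus = pi.sum(axis=1) / (n - 1)            # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)           # negative (entering) flow
    return phi_plus - phi_minus                    # net flow used for the complete ranking

if __name__ == "__main__":
    # Three hypothetical geographical entities scored on cost (minimise) and access (maximise).
    scores = np.array([[10.0, 7.0], [8.0, 5.0], [12.0, 9.0]])
    weights = np.array([0.6, 0.4])
    maximise = np.array([False, True])
    print(promethee_ii(scores, weights, maximise))
```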
scidocsrr
40304cb4069dcd4e8e12cb2d1d782d2e
Classification using Machine Learning Techniques
[ { "docid": "1c89b9927bd5e81c53a9896cd3122b92", "text": "The whole world is changed rapidly and using the current technologies Internet becomes an essential need for everyone. Web is used in every field. Most of the people use web for a common purpose like online shopping, chatting etc. During an online shopping large number of reviews/opinions are given by the users that reflect whether the product is good or bad. These reviews need to be explored, analyse and organized for better decision making. Opinion Mining is a natural language processing task that deals with finding orientation of opinion in a piece of text with respect to a topic. In this paper a document based opinion mining system is proposed that classify the documents as positive, negative and neutral. Negation is also handled in the proposed system. Experimental results using reviews of movies show the effectiveness of the system.", "title": "" } ]
[ { "docid": "014ff12b51ce9f4399bca09e0dedabed", "text": "The crystallographic preferred orientation (CPO) of olivine produced during dislocation creep is considered to be the primary cause of elastic anisotropy in Earth’s upper mantle and is often used to determine the direction of mantle flow. A fundamental question remains, however, as to whether the alignment of olivine crystals is uniquely produced by dislocation creep. Here we report the development of CPO in iron-free olivine (that is, forsterite) during diffusion creep; the intensity and pattern of CPO depend on temperature and the presence of melt, which control the appearance of crystallographic planes on grain boundaries. Grain boundary sliding on these crystallography-controlled boundaries accommodated by diffusion contributes to grain rotation, resulting in a CPO. We show that strong radial anisotropy is anticipated at temperatures corresponding to depths where melting initiates to depths where strongly anisotropic and low seismic velocities are detected. Conversely, weak anisotropy is anticipated at temperatures corresponding to depths where almost isotropic mantle is found. We propose diffusion creep to be the primary means of mantle flow.", "title": "" }, { "docid": "aa0e52963f4fab6db73df79a16fb40aa", "text": "GENTNER, DEDRE. Metaphor as Structure Mapping: The Relational Shift. CHILD DEVELOPMENT, 1988, 59, 47-59. The goal of this research is to clarify the development of metaphor by using structure-mapping theory to make distinctions among kinds of metaphors. In particular, it is proposed that children can understand metaphors based on shared object attributes before those based on shared relational structure. This predicts (1) early ability to interpret metaphors based on shared attributes, (2 ) a developmental increase in ability to interpret metaphors based on shared relational structure, and (3) a shift from primarily attributional to primarily relational interpretations for metaphors that can be understood in either way. 2 experiments were performed to test these claims. There were 3 kinds of metaphors, varying in whether the shared information forming the basis for the interpretation was attributional, relational, or both. In Experiment 1, children aged 5-6 and 910 and adults produced interpretations of the 3 types of metaphors. The attributionality and relationality of their interpretations were scored by independent judges. In Experiment 2, children aged 45 and 7-8 and adults chose which of 2 interpretations-relational or attributional-of a metaphor they preferred. In both experiments, relational responding increased significantly with age, but attributional responding did not. These results indicate a developmental shift toward a focus on relational structure in metaphor interpretation.", "title": "" }, { "docid": "fb6068d738c7865d07999052750ff6a8", "text": "Malware detection and prevention methods are increasingly becoming necessary for computer systems connected to the Internet. The traditional signature based detection of malware fails for metamorphic malware which changes its code structurally while maintaining functionality at time of propagation. This category of malware is called metamorphic malware. In this paper we dynamically analyze the executables produced from various metamorphic generators through an emulator by tracing API calls. 
A signature is generated for an entire malware class (each class representing a family of viruses generated from one metamorphic generator) instead of for individual malware sample. We show that most of the metamorphic viruses of same family are detected by the same base signature. Once a base signature for a particular metamorphic generator is generated, all the metamorphic viruses created from that tool are easily detected by the proposed method. A Proximity Index between the various Metamorphic generators has been proposed to determine how similar two or more generators are.", "title": "" }, { "docid": "b18f98cfad913ebf3ce1780b666277cb", "text": "Deep convolutional neural network (DCNN) has achieved remarkable performance on object detection and speech recognition in recent years. However, the excellent performance of a DCNN incurs high computational complexity and large memory requirement In this paper, an equal distance nonuniform quantization (ENQ) scheme and a K-means clustering nonuniform quantization (KNQ) scheme are proposed to reduce the required memory storage when low complexity hardware or software implementations are considered. For the VGG-16 and the AlexNet, the proposed nonuniform quantization schemes reduce the number of required memory storage by approximately 50% while achieving almost the same or even better classification accuracy compared to the state-of-the-art quantization method. Compared to the ENQ scheme, the proposed KNQ scheme provides a better tradeoff when higher accuracy is required.", "title": "" }, { "docid": "999f30cbd208bc7d262de954d29dcd39", "text": "Purpose\nThe purpose of the study was to determine the sensitivity and specificity, and to establish cutoff points for the severity index Percentage of Consonants Correct - Revised (PCC-R) in Brazilian Portuguese-speaking children with and without speech sound disorders.\n\n\nMethods\n72 children between 5:00 and 7:11 years old - 36 children without speech and language complaints and 36 children with speech sound disorders. The PCC-R was applied to the figure naming and word imitation tasks that are part of the ABFW Child Language Test. Results were statistically analyzed. The ROC curve was performed and sensitivity and specificity values ​​of the index were verified.\n\n\nResults\nThe group of children without speech sound disorders presented greater PCC-R values in both tasks, regardless of the gender of the participants. The cutoff value observed for the picture naming task was 93.4%, with a sensitivity value of 0.89 and specificity of 0.94 (age independent). For the word imitation task, results were age-dependent: for age group ≤6:5 years old, the cutoff value was 91.0% (sensitivity of 0.77 and specificity of 0.94) and for age group >6:5 years-old, the cutoff value was 93.9% (sensitivity of 0.93 and specificity of 0.94).\n\n\nConclusion\nGiven the high sensitivity and specificity of PCC-R, we can conclude that the index was effective in discriminating and identifying children with and without speech sound disorders.", "title": "" }, { "docid": "d2f929806163b2be07c57f0b34fdb3da", "text": "This article reviews the use of robotic technology for otolaryngologic surgery. The authors discuss the development of the technology and its current uses in the operating room. 
They address procedures such as oropharyngeal transoral robotic surgery (TORS), laryngeal TORS, and thyroidectomy, and also note the role of robotics in teaching.", "title": "" }, { "docid": "7f83946dd7d9869aa49bed57107c2870", "text": "A study of wireless technologies for IoT applications in terms of power consumption has been presented in this paper. The study focuses on the importance of using low power wireless techniques and modules in IoT applications by introducing a comparative between different low power wireless communication techniques such as ZigBee, Low Power Wi-Fi, 6LowPAN, LPWA and their modules to conserve power and longing the life for the IoT network sensors. The approach of the study is in term of protocol used and the particular module that achieve that protocol. The candidate protocols are classified according to the range of connectivity between sensor nodes. For short ranges connectivity the candidate protocols are ZigBee, 6LoWPAN and low power Wi-Fi. For long connectivity the candidate is LoRaWAN protocol. The results of the study demonstrate that the choice of module for each protocol plays a vital role in battery life due to the difference of power consumption for each module/protocol. So, the evaluation of protocols with each other depends on the module used.", "title": "" }, { "docid": "342e7faa2f5b71b9bde287f05f6118c7", "text": "Skyline queries have wide-ranging applications in fields that involve multi-criteria decision making, including tourism, retail industry, and human resources. By automatically removing incompetent candidates, skyline queries allow users to focus on a subset of superior data items (i.e., the skyline), thus reducing the decision-making overhead. However, users are still required to interpret and compare these superior items manually before making a successful choice. This task is challenging because of two issues. First, people usually have fuzzy, unstable, and inconsistent preferences when presented with multiple candidates. Second, skyline queries do not reveal the reasons for the superiority of certain skyline points in a multi-dimensional space. To address these issues, we propose SkyLens, a visual analytic system aiming at revealing the superiority of skyline points from different perspectives and at different scales to aid users in their decision making. Two scenarios demonstrate the usefulness of SkyLens on two datasets with a dozen of attributes. A qualitative study is also conducted to show that users can efficiently accomplish skyline understanding and comparison tasks with SkyLens.", "title": "" }, { "docid": "6a3bb84e7b8486692611aaa790609099", "text": "As ubiquitous commerce using IT convergence technologies is coming, it is important for the strategy of cosmetic sales to investigate the sensibility and the degree of preference in the environment for which the makeup style has changed focusing on being consumer centric. The users caused the diversification of the facial makeup styles, because they seek makeup and individuality to satisfy their needs. In this paper, we proposed the effect of the facial makeup style recommendation on visual sensibility. Development of the facial makeup style recommendation system used a user interface, sensibility analysis, weather forecast, and collaborative filtering for the facial makeup styles to satisfy the user’s needs in the cosmetic industry. 
Collaborative filtering was adopted to recommend facial makeup style of interest for users based on the predictive relationship discovered between the current user and other previous users. We used makeup styles in the survey questionnaire. The pictures of makeup style details, such as foundation, color lens, eye shadow, blusher, eyelash, lipstick, hairstyle, hairpin, necklace, earring, and hair length were evaluated in terms of sensibility. The data were analyzed by SPSS using ANOVA and factor analysis to discover the most effective types of details from the consumer’s sensibility viewpoint. Sensibility was composed of three concepts: contemporary, mature, and individual. The details of facial makeup styles were positioned in 3D-concept space to relate each type of detail to the makeup concept regarding a woman’s cosmetics. Ultimately, this paper suggests empirical applications to verify the adequacy and the validity of this system.", "title": "" }, { "docid": "b99207292a098761d1bb5cc220cf0790", "text": "Many researchers have attempted to predict the Enron corporate hierarchy from the data. This work, however, has been hampered by a lack of data. We present a new, large, and freely available gold-standard hierarchy. Using our new gold standard, we show that a simple lower bound for social network-based systems outperforms an upper bound on the approach taken by current NLP systems.", "title": "" }, { "docid": "efe8cf69a4666151603393032af22d8a", "text": "In this paper we present and discuss the findings of a study that investigated how people manage their collections of digital photographs. The six-month, 13-participant study included interviews, questionnaires, and analysis of usage statistics gathered from an instrumented digital photograph management tool called Shoebox. Alongside simple browsing features such as folders, thumbnails and timelines, Shoebox has some advanced multimedia features: content-based image retrieval and speech recognition applied to voice annotations. Our results suggest that participants found their digital photos much easier to manage than their non-digital ones, but that this advantage was almost entirely due to the simple browsing features. The advanced features were not used very often and their perceived utility was low. These results should help to inform the design of improved tools for managing personal digital photographs.", "title": "" }, { "docid": "b70f852bb89e67decf07554a02ee977a", "text": "The advances in information technology have witnessed great progress on healthcare technologies in various domains nowadays. However, these new technologies have also made healthcare data not only much bigger but also much more difficult to handle and process. Moreover, because the data are created from a variety of devices within a short time span, the characteristics of these data are that they are stored in different formats and created quickly, which can, to a large extent, be regarded as a big data problem. To provide a more convenient service and environment of healthcare, this paper proposes a cyber-physical system for patient-centric healthcare applications and services, called Health-CPS, built on cloud and big data analytics technologies. This system consists of a data collection layer with a unified standard, a data management layer for distributed storage and parallel computing, and a data-oriented service layer. 
The results of this study show that the technologies of cloud and big data can be used to enhance the performance of the healthcare system so that humans can then enjoy various smart healthcare applications and services.", "title": "" }, { "docid": "42440fb81f45c470d591c3bc57e7875b", "text": "We develop a framework to incorporate unlabeled data in the Error-Correcting Output Coding (ECOC) setup by decomposing multiclass problems into multiple binary problems and then use Co-Training to learn the individual binary classification problems. We show that our method is especially useful for classification tasks involving a large number of categories where Co-training doesn’t perform very well by itself and when combined with ECOC, outperforms several other algorithms that combine labeled and unlabeled data for text classification in terms of accuracy, precision-recall tradeoff, and efficiency.", "title": "" }, { "docid": "61d506905286fc3297622d1ac39534f0", "text": "In this paper we present the setup of an extensive Wizard-of-Oz environment used for the data collection and the development of a dialogue system. The envisioned Perception and Interaction Assistant will act as an independent dialogue partner. Passively observing the dialogue between the two human users with respect to a limited domain, the system should take the initiative and get meaningfully involved in the communication process when required by the conversational situation. The data collection described here involves audio and video data. We aim at building a rich multi-media data corpus to be used as a basis for our research which includes, inter alia, speech and gaze direction recognition, dialogue modelling and proactivity of the system. We further aspire to obtain data with emotional content to perfom research on emotion recognition, psychopysiological and usability analysis.", "title": "" }, { "docid": "8a128a099087c3dee5bbca7b2a8d8dc4", "text": "A large class of computational problems involve the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. We show that a large number of classic unsolved problems of covering, matching, packing, routing, assignment and sequencing are equivalent, in the sense that either each of them possesses a polynomial-bounded algorithm or none of them does.", "title": "" }, { "docid": "c0b22c68ee02c2adffa7fa9cdfd15812", "text": "In this paper the design issues of input electromagnetic interference (EMI) filters for inverter-fed motor drives including motor Common Mode (CM) voltage active compensation are studied. A coordinated design of motor CM-voltage active compensator and input EMI filter allows the drive system to comply with EMC standards and to yield an increased reliability at the same time. Two CM input EMI filters are built and compared. They are, designed, respectively, according to the conventional design procedure and considering the actual impedance mismatching between EMI source and receiver. 
In both design procedures, the presence of the active compensator is taken into account. The experimental evaluation of both filters' performance is given in terms of compliance of the system to standard limits.", "title": "" }, { "docid": "ebb941fe8b0807a4dcfe02ff898cf99f", "text": "Using “Analyze Results” at the Web of Science, one can directly generate overlays onto global journal maps of science. The maps are based on the 10,000+ journals contained in the Journal Citation Reports (JCR) of the Science and Social Science Citation Indices (2011). The disciplinary diversity of the retrieval is measured in terms of Rao-Stirling’s “quadratic entropy.” Since this indicator of interdisciplinarity is normalized between zero and one, the interdisciplinarity can be compared among document sets and across years, cited or citing. The colors used for the overlays are based on Blondel et al.’s (2008) community-finding algorithms operating on the relations journals included in JCRs. The results can be exported from VOSViewer with different options such as proportional labels, heat maps, or cluster density maps. The maps can also be web-started and/or animated (e.g., using PowerPoint). The “citing” dimension of the aggregated journal-journal citation matrix was found to provide a more comprehensive description than the matrix based on the cited archive. The relations between local and global maps and their different functions in studying the sciences in terms of journal literatures are further discussed: local and global maps are based on different assumptions and can be expected to serve different purposes for the explanation.", "title": "" }, { "docid": "ed7826f37cf45f56ba6e7abf98c509e7", "text": "The progressive ability of a six-strains L. monocytogenes cocktail to form biofilm on stainless steel (SS), under fish-processing simulated conditions, was investigated, together with the biocide tolerance of the developed sessile communities. To do this, the pathogenic bacteria were left to form biofilms on SS coupons incubated at 15°C, for up to 240h, in periodically renewable model fish juice substrate, prepared by aquatic extraction of sea bream flesh, under both mono-species and mixed-culture conditions. In the latter case, L. monocytogenes cells were left to produce biofilms together with either a five-strains cocktail of four Pseudomonas species (fragi, savastanoi, putida and fluorescens), or whole fish indigenous microflora. The biofilm populations of L. monocytogenes, Pseudomonas spp., Enterobacteriaceae, H2S producing and aerobic plate count (APC) bacteria, both before and after disinfection, were enumerated by selective agar plating, following their removal from surfaces through bead vortexing. Scanning electron microscopy was also applied to monitor biofilm formation dynamics and anti-biofilm biocidal actions. Results revealed the clear dominance of Pseudomonas spp. bacteria in all the mixed-culture sessile communities throughout the whole incubation period, with the in parallel sole presence of L. monocytogenes cells to further increase (ca. 10-fold) their sessile growth. With respect to L. monocytogenes and under mono-species conditions, its maximum biofilm population (ca. 6logCFU/cm2) was reached at 192h of incubation, whereas when solely Pseudomonas spp. cells were also present, its biofilm formation was either slightly hindered or favored, depending on the incubation day. 
However, when all the fish indigenous microflora was present, biofilm formation by the pathogen was greatly hampered and never exceeded 3logCFU/cm2, while under the same conditions, APC biofilm counts had already surpassed 7logCFU/cm2 by the end of the first 96h of incubation. All here tested disinfection treatments, composed of two common food industry biocides gradually applied for 15 to 30min, were insufficient against L. monocytogenes mono-species biofilm communities, with the resistance of the latter to significantly increase from the 3rd to 7th day of incubation. However, all these treatments resulted in no detectable L. monocytogenes cells upon their application against the mixed-culture sessile communities also containing the fish indigenous microflora, something probably associated with the low attached population level of these pathogenic cells before disinfection (<102CFU/cm2) under such mixed-culture conditions. Taken together, all these results expand our knowledge on both the population dynamics and resistance of L. monocytogenes biofilm cells under conditions resembling those encountered within the seafood industry and should be considered upon designing and applying effective anti-biofilm strategies.", "title": "" }, { "docid": "16488fc65794a318e06777189edc3e4b", "text": "This work details Sighthoundś fully automated license plate detection and recognition system. The core technology of the system is built using a sequence of deep Convolutional Neural Networks (CNNs) interlaced with accurate and efficient algorithms. The CNNs are trained and fine-tuned so that they are robust under different conditions (e.g. variations in pose, lighting, occlusion, etc.) and can work across a variety of license plate templates (e.g. sizes, backgrounds, fonts, etc). For quantitative analysis, we show that our system outperforms the leading license plate detection and recognition technology i.e. ALPR on several benchmarks. Our system is available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud", "title": "" } ]
scidocsrr
70b799ee929463682762f21d422f7b3a
Low-Rank Similarity Metric Learning in High Dimensions
[ { "docid": "2d34d9e9c33626727734766a9951a161", "text": "In this paper, we propose and study the use of alternating direction algorithms for several ℓ1-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the ℓ1-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an ℓ1-problem into one having partially separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the ℓ1-norm fidelity should be the fidelity of choice in compressive sensing.", "title": "" } ]
[ { "docid": "1b53b5c7741dad884ab94b3b8a3d8cfd", "text": "The impact of self-heating effect (SHE) on device reliability characterization, such as BTI, HCI, and TDDB, is extensively examined in this work. Self-heating effect and its impact on device level reliability mechanisms is carefully studied, and an empirical model for layout dependent SHE is established. Since the recovery effect during NBTI characterization is found sensitive to self-heating, either changing VT shift as index or adopting μs-delay measurement system is proposed to get rid of SHE influence. In common HCI stress condition, the high drain stress bias usually leads to high power or self-heating, which may dramatically under-estimate the lifetime extracted. The stress condition Vg = 0.6~0.8Vd is suggested to meet the reasonable operation power and self-heating induced temperature rising. Similarly, drain-bias dependent TDDB characteristics are also under-estimated due to the existence of SHE and need careful calibration to project the lifetime at common usage bias.", "title": "" }, { "docid": "90c46b6e7f125481e966b746c5c76c97", "text": "Black-box mutational fuzzing is a simple yet effective technique to find bugs in software. Given a set of program-seed pairs, we ask how to schedule the fuzzings of these pairs in order to maximize the number of unique bugs found at any point in time. We develop an analytic framework using a mathematical model of black-box mutational fuzzing and use it to evaluate 26 existing and new randomized online scheduling algorithms. Our experiments show that one of our new scheduling algorithms outperforms the multi-armed bandit algorithm in the current version of the CERT Basic Fuzzing Framework (BFF) by finding 1.5x more unique bugs in the same amount of time.", "title": "" }, { "docid": "5f6e77c95d92c1b8f571921954f252d6", "text": "Parallel job scheduling has gained increasing recognition in recent years as a distinct area of study. However , there is concern about the divergence of theory and practice in the eld. We review theoretical research in this area, and recommendations based on recent results. This is contrasted with a proposal for standard interfaces among the components of a scheduling system, that has grown from requirements in the eld.", "title": "" }, { "docid": "12f717b4973a5290233d6f03ba05626b", "text": "We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. 
We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.", "title": "" }, { "docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7", "text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.", "title": "" }, { "docid": "82592f60e0039089e3c16d9534780ad5", "text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.", "title": "" }, { "docid": "5a8f8b9094c62b77d9f71cf5b2a3a562", "text": "Recent experiments have established that information can be encoded in the spike times of neurons relative to the phase of a background oscillation in the local field potential-a phenomenon referred to as \"phase-of-firing coding\" (PoFC). These firing phase preferences could result from combining an oscillation in the input current with a stimulus-dependent static component that would produce the variations in preferred phase, but it remains unclear whether these phases are an epiphenomenon or really affect neuronal interactions-only then could they have a functional role. Here we show that PoFC has a major impact on downstream learning and decoding with the now well established spike timing-dependent plasticity (STDP). To be precise, we demonstrate with simulations how a single neuron equipped with STDP robustly detects a pattern of input currents automatically encoded in the phases of a subset of its afferents, and repeating at random intervals. Remarkably, learning is possible even when only a small fraction of the afferents ( approximately 10%) exhibits PoFC. The ability of STDP to detect repeating patterns had been noted before in continuous activity, but it turns out that oscillations greatly facilitate learning. A benchmark with more conventional rate-based codes demonstrates the superiority of oscillations and PoFC for both STDP-based learning and the speed of decoding: the oscillation partially formats the input spike times, so that they mainly depend on the current input currents, and can be efficiently learned by STDP and then recognized in just one oscillation cycle. 
This suggests a major functional role for oscillatory brain activity that has been widely reported experimentally.", "title": "" }, { "docid": "abe5bdf6a17cf05b49ac578347a3ca5d", "text": "To realize the broad vision of pervasive computing, underpinned by the “Internet of Things” (IoT), it is essential to break down application and technology-based silos and support broad connectivity and data sharing; the cloud being a natural enabler. Work in IoT tends toward the subsystem, often focusing on particular technical concerns or application domains, before offloading data to the cloud. As such, there has been little regard given to the security, privacy, and personal safety risks that arise beyond these subsystems; i.e., from the wide-scale, cross-platform openness that cloud services bring to IoT. In this paper, we focus on security considerations for IoT from the perspectives of cloud tenants, end-users, and cloud providers, in the context of wide-scale IoT proliferation, working across the range of IoT technologies (be they things or entire IoT subsystems). Our contribution is to analyze the current state of cloud-supported IoT to make explicit the security considerations that require further work.", "title": "" }, { "docid": "77371cfa61dbb3053f3106f5433d23a7", "text": "We present a new noniterative approach to synthetic aperture radar (SAR) autofocus, termed the multichannel autofocus (MCA) algorithm. The key in the approach is to exploit the multichannel redundancy of the defocusing operation to create a linear subspace, where the unknown perfectly focused image resides, expressed in terms of a known basis formed from the given defocused image. A unique solution for the perfectly focused image is then directly determined through a linear algebraic formulation by invoking an additional image support condition. The MCA approach is found to be computationally efficient and robust and does not require prior assumptions about the SAR scene used in existing methods. In addition, the vector-space formulation of MCA allows sharpness metric optimization to be easily incorporated within the restoration framework as a regularization term. We present experimental results characterizing the performance of MCA in comparison with conventional autofocus methods and discuss the practical implementation of the technique.", "title": "" }, { "docid": "7209596ad58da21211bfe0ceaaccc72b", "text": "Knowledge tracing (KT)[1] has been used in various forms for adaptive computerized instruction for more than 40 years. However, despite its long history of application, it is difficult to use in domain model search procedures, has not been used to capture learning where multiple skills are needed to perform a single action, and has not been used to compute latencies of actions. On the other hand, existing models used for educational data mining (e.g. Learning Factors Analysis (LFA)[2]) and model search do not tend to allow the creation of a “model overlay” that traces predictions for individual students with individual skills so as to allow the adaptive instruction to automatically remediate performance. Because these limitations make the transition from model search to model application in adaptive instruction more difficult, this paper describes our work to modify an existing data mining model so that it can also be used to select practice adaptively. 
We compare this new adaptive data mining model (PFA, Performance Factors Analysis) with two versions of LFA and then compare PFA with standard KT.", "title": "" }, { "docid": "00c5432a69225bd7a7dbd41f88a1f391", "text": "I The viewpoint of the subject of matroids, and related areas of lattice theory, has always been, in one way or another, abstraction of algebraic dependence or, equivalently, abstraction of the incidence relations in geometric representations of algebra. Often one of the main derived facts is that all bases have the same cardinality. (See Van der Waerden, Section 33.) From the viewpoint of mathematical programming, the equal cardinality of all bases has special meaning — namely, that every basis is an optimum-cardinality basis. We are thus prompted to study this simple property in the context of linear programming. It turns out to be useful to regard \" pure matroid theory \" , which is only incidentally related to the aspects of algebra which it abstracts, as the study of certain classes of convex polyhedra. (1) A matroid M = (E, F) can be defined as a finite set E and a nonempty family F of so-called independent subsets of E such that (a) Every subset of an independent set is independent, and (b) For every A ⊆ E, every maximal independent subset of A, i.e., every basis of A, has the same cardinality, called the rank, r(A), of A (with respect to M). (This definition is not standard. It is prompted by the present interest).", "title": "" }, { "docid": "932ed2eb35ccf0055a49da12e2d0edfc", "text": "An intelligent manhole cover management system (IMCS) is one of the most important basic platforms in a smart city to prevent frequent manhole cover accidents. Manhole cover displacement, loss, and damage pose threats to personal safety, which is contrary to the aim of smart cities. This paper proposes an edge computing-based IMCS for smart cities. A unique radio frequency identification tag with tilt and vibration sensors is used for each manhole cover, and a Narrowband Internet of Things is adopted for communication. Meanwhile, edge computing servers interact with corresponding management personnel through mobile devices based on the collected information. A demonstration application of the proposed IMCS in the Xiasha District of Hangzhou, China, showed its high efficiency. It efficiently reduced the average repair time, which could improve the security for both people and manhole covers.", "title": "" }, { "docid": "5c3ae59522d549bed4c059a11b9724c6", "text": "The chemokine receptor CCR7 drives leukocyte migration into and within lymph nodes (LNs). It is activated by chemokines CCL19 and CCL21, which are scavenged by the atypical chemokine receptor ACKR4. CCR7-dependent navigation is determined by the distribution of extracellular CCL19 and CCL21, which form concentration gradients at specific microanatomical locations. The mechanisms underpinning the establishment and regulation of these gradients are poorly understood. In this article, we have incorporated multiple biochemical processes describing the CCL19-CCL21-CCR7-ACKR4 network into our model of LN fluid flow to establish a computational model to investigate intranodal chemokine gradients. Importantly, the model recapitulates CCL21 gradients observed experimentally in B cell follicles and interfollicular regions, building confidence in its ability to accurately predict intranodal chemokine distribution. 
Parameter variation analysis indicates that the directionality of these gradients is robust, but their magnitude is sensitive to these key parameters: chemokine production, diffusivity, matrix binding site availability, and CCR7 abundance. The model indicates that lymph flow shapes intranodal CCL21 gradients, and that CCL19 is functionally important at the boundary between B cell follicles and the T cell area. It also predicts that ACKR4 in LNs prevents CCL19/CCL21 accumulation in efferent lymph, but does not control intranodal gradients. Instead, it attributes the disrupted interfollicular CCL21 gradients observed in Ackr4-deficient LNs to ACKR4 loss upstream. Our novel approach has therefore generated new testable hypotheses and alternative interpretations of experimental data. Moreover, it acts as a framework to investigate gradients at other locations, including those that cannot be visualized experimentally or involve other chemokines.", "title": "" }, { "docid": "98b536786ecfeab870467c5951924662", "text": "An historical discussion is provided of the intellectual trends that caused nineteenth century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data are emphasized. Three sources of contemporary neural network research—the binary, linear, and continuous-nonlinear models—are noted. The remainder of the article describes results about continuous-nonlinear models: Many models of content-addressable memory are shown to be special cases of the Cohen-Grossberg model and global Liapunov function, including the additive, brain-state-in-a-box, McCulloch-Pitts, Boltzmann machine, Hartline-Ratliff-Miller, shunting, masking field, bidirectional associative memory, Volterra-Lotka, Gilpin-Ayala, and Eigen-Schuster models. A Liapunov functional method is described for proving global limit or oscillation theorems.", "title": "" }, { "docid": "b93825ddae40f61a27435bb255a3cc2e", "text": "Visual programming arguably provides greater benefit in explicit parallel programming, particularly coarse grain MIMD programming, than in sequential programming. Explicitly parallel programs are multi-dimensional objects; the natural representations of a parallel program are annotated directed graphs: data flow graphs, control flow graphs, etc. where the nodes of the graphs are sequential computations. The execution of parallel programs is a directed graph of instances of sequential computations. A visually based (directed graph) representation of parallel programs is thus more natural than a pure text string language where multi-dimensional structures must be implicitly defined. The naturalness of the annotated directed graph representation of parallel programs enables methods for programming and debugging which are qualitatively different and arguably superior to the conventional practice based on pure text string languages. Annotation of the graphs is a critical element of a practical visual programming system; text is still the best way to represent many aspects of programs. This paper presents a model of parallel programming and a model of execution for parallel programs which are the conceptual framework for a complete visual programming environment including capture of parallel structure, compilation and behavior analysis (performance and debugging).
Two visually-oriented parallel programming systems, CODE 2.0 and HeNCE, each based on a variant of the model of programming, will be used to illustrate the concepts. The benefits of visually-oriented realizations of these models for program structure capture, software component reuse, performance analysis and debugging will be explored and hopefully demonstrated by examples in these representations. It is only by actually implementing and using visual parallel programming languages that we have been able to fully evaluate their merits.", "title": "" }, { "docid": "4b96679173c825db7bc334449b6c4b83", "text": "This article provides the first survey of computational models of emotion in reinforcement learning (RL) agents. The survey focuses on agent/robot emotions, and mostly ignores human user emotions. Emotions are recognized as functional in decision-making by influencing motivation and action selection. Therefore, computational emotion models are usually grounded in the agent’s decision making architecture, of which RL is an important subclass. Studying emotions in RL-based agents is useful for three research fields. For machine learning (ML) researchers, emotion models may improve learning efficiency. For the interactive ML and human–robot interaction community, emotions can communicate state and enhance user investment. Lastly, it allows affective modelling researchers to investigate their emotion theories in a successful AI agent class. This survey provides background on emotion theory and RL. It systematically addresses (1) from what underlying dimensions (e.g. homeostasis, appraisal) emotions can be derived and how these can be modelled in RL-agents, (2) what types of emotions have been derived from these dimensions, and (3) how these emotions may either influence the learning efficiency of the agent or be useful as social signals. We also systematically compare evaluation criteria, and draw connections to important RL sub-domains like (intrinsic) motivation and model-based RL. In short, this survey provides both a practical overview for engineers wanting to implement emotions in their RL agents, and identifies challenges and directions for future emotion-RL research.", "title": "" }, { "docid": "87ebf3c29afc0ea6b8c386f8f5ba31f9", "text": "In this study, we present a weakly supervised approach that discovers the discriminative structures of sketch images, given pairs of sketch images and web images. In contrast to traditional approaches that use global appearance features or rely on keypoint features, our aim is to automatically learn the shared latent structures that exist between sketch images and real images, even when there are significant appearance differences across its relevant real images. To accomplish this, we propose a deep convolutional neural network, named SketchNet. We firstly develop a triplet composed of sketch, positive and negative real image as the input of our neural network. To discover the coherent visual structures between the sketch and its positive pairs, we introduce the softmax as the loss function. Then a ranking mechanism is introduced to make the positive pairs obtain a higher score comparing over negative ones to achieve robust representation. Finally, we formalize above-mentioned constraints into the unified objective function, and create an ensemble feature representation to describe the sketch images.
Experiments on the TUBerlin sketch benchmark demonstrate the effectiveness of our model and show that deep feature representation brings substantial improvements over other state-of-the-art methods on sketch classification.", "title": "" }, { "docid": "254fab8fa998333a9c1f261a620c4b23", "text": "Pathological self-mutilation has been prevalent throughout history and in many cultures. Major self mutilations autocastration, eye enucleation and limb amputation are rarer than minor self-mutilations like wrist cutting, head banging etc. Because of their gruesome nature, major self-mutilations invoke significant negative emotions among therapists and caregivers. Unfortunately, till date, there is very little research in this field. In the absence of robust neurobiological understanding and speculative psychodynamic theories, the current understanding is far from satisfactory. At the same time, the role of culture and society cannot be completely ignored while understanding major self-mutilations. Literature from western culture describes this as an act of repentance towards past bad thoughts or acts in contrast to the traditional eastern culture that praises it as an act of sacrifice for achieving superiority and higher goals in the society. The authors present here two cases of major self-mutilation i.e. autocastration and autoenucleation both of which occurred in patients suffering from schizophrenia. They have also reviewed the existing literature and current understanding of this phenomenon (German J Psychiatry 2010; 13 (4): 164-170).", "title": "" }, { "docid": "095f8d5c3191d6b70b2647b562887aeb", "text": "Hardware specialization, in the form of datapath and control circuitry customized to particular algorithms or applications, promises impressive performance and energy advantages compared to traditional architectures. Current research in accelerators relies on RTL-based synthesis flows to produce accurate timing, power, and area estimates. Such techniques not only require significant effort and expertise but also are slow and tedious to use, making large design space exploration infeasible. To overcome this problem, the authors developed Aladdin, a pre-RTL, power-performance accelerator modeling framework and demonstrated its application to system-on-chip (SoC) simulation. Aladdin estimates performance, power, and area of accelerators within 0.9, 4.9, and 6.6 percent with respect to RTL implementations. Integrated with architecture-level general-purpose core and memory hierarchy simulators, Aladdin provides researchers with a fast but accurate way to model the power and performance of accelerators in an SoC environment.", "title": "" }, { "docid": "9292601d14f70925920d3b2ab06a39ce", "text": "Internet review sites allow consumers to write detailed reviews of products potentially containing information related to user experience (UX) and usability. Using 5198 sentences from 3492 online reviews of software and video games, we investigate the content of online reviews with the aims of (i) charting the distribution of information in reviews among different dimensions of usability and UX, and (ii) extracting an associated vocabulary for each dimension using techniques from natural language processing and machine learning. 
We (a) find that 13%-49% of sentences in our online reviews pool contain usability or UX information; (b) chart the distribution of four sets of dimensions of usability and UX across reviews from two product categories; (c) extract a catalogue of important word stems for a number of dimensions. Our results suggest that a greater understanding of users' preoccupation with different dimensions of usability and UX may be inferred from the large volume of self-reported experiences online, and that research focused on identifying pertinent dimensions of usability and UX may benefit further from empirical studies of user-generated experience reports.", "title": "" } ]
scidocsrr
b133d39f93f87b3f8c051ba53b9acd2a
Playing games for security: an efficient exact algorithm for solving Bayesian Stackelberg games
[ { "docid": "4cc4c8fd07f30b5546be2376c1767c19", "text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.", "title": "" } ]
[ { "docid": "cc85e917ca668a60461ba6848e4c3b42", "text": "In this paper a generic method for fault detection and isolation (FDI) in manufacturing systems considered as discrete event systems (DES) is presented. The method uses an identified model of the closed loop of plant and controller built on the basis of observed fault free system behavior. An identification algorithm known from literature is used to determine the fault detection model in form of a non-deterministic automaton. New results of how to parameterize this algorithm are reported. To assess the fault detection capability of an identified automaton, probabilistic measures are proposed. For fault isolation, the concept of residuals adapted for DES is used by defining appropriate set operations representing generic fault symptoms. The method is applied to a case study system.", "title": "" }, { "docid": "8cfa2086e1c73bae6945d1a19d52be26", "text": "We present a unified dynamics framework for real-time visual effects. Using particles connected by constraints as our fundamental building block allows us to treat contact and collisions in a unified manner, and we show how this representation is flexible enough to model gases, liquids, deformable solids, rigid bodies and cloth with two-way interactions. We address some common problems with traditional particle-based methods and describe a parallel constraint solver based on position-based dynamics that is efficient enough for real-time applications.", "title": "" }, { "docid": "90b0ee9cf92c3ff905c2dffda9e3e509", "text": "Julius is an open-source large-vocabulary speech recognition software used for both academic research and industrial applications. It executes real-time speech recognition of a 60k-word dictation task on low-spec PCs with small footprint, and even on embedded devices. Julius supports standard language models such as statistical N-gram model and rule-based grammars, as well as Hidden Markov Model (HMM) as an acoustic model. One can build a speech recognition system of his own purpose, or can integrate the speech recognition capability to a variety of applications using Julius. This article describes an overview of Julius, major features and specifications, and summarizes the developments conducted in the recent years.", "title": "" }, { "docid": "fc94c6fb38198c726ab3b417c3fe9b44", "text": "Tremor is a rhythmical and involuntary oscillatory movement of a body part and it is one of the most common movement disorders. Orthotic devices have been under investigation as a noninvasive tremor suppression alternative to medication or surgery. The challenge in musculoskeletal tremor suppression is estimating and attenuating the tremor motion without impeding the patient's intentional motion. In this research a robust tremor suppression algorithm was derived for patients with pathological tremor in the upper limbs. First the motion in the tremor frequency range is estimated using a high-pass filter. Then, by applying the backstepping method the appropriate amount of torque is calculated to drive the output of the estimator toward zero. This is equivalent to an estimation of the tremor torque. It is shown that the arm/orthotic device control system is stable and the algorithm is robust despite inherent uncertainties in the open-loop human arm joint model. A human arm joint simulator, capable of emulating tremorous motion of a human arm joint was used to evaluate the proposed suppression algorithm experimentally for two types of tremor, Parkinson and essential. 
Experimental results show 30-42 dB (97.5-99.2%) suppression of tremor with minimal effect on the intentional motion.", "title": "" }, { "docid": "e8db06439dc533e0dd24e0920feb70c9", "text": "Today, vehicles are increasingly being connected to the Internet of Things which enable them to provide ubiquitous access to information to drivers and passengers while on the move. However, as the number of connected vehicles keeps increasing, new requirements (such as seamless, secure, robust, scalable information exchange among vehicles, humans, and roadside infrastructures) of vehicular networks are emerging. In this context, the original concept of vehicular ad-hoc networks is being transformed into a new concept called the Internet of Vehicles (IoV). We discuss the benefits of IoV along with recent industry standards developed to promote its implementation. We further present recently proposed communication protocols to enable the seamless integration and operation of the IoV. Finally, we present future research directions of IoV that require further consideration from the vehicular research community.", "title": "" }, { "docid": "ccb6067614bebf844d96e9a337a4c0d4", "text": "BACKGROUND\nJoint pain is thought to be an early sign of injury to a pitcher.\n\n\nOBJECTIVE\nTo evaluate the association between pitch counts, pitch types, and pitching mechanics and shoulder and elbow pain in young pitchers.\n\n\nSTUDY DESIGN\nProspective cohort study.\n\n\nMETHODS\nFour hundred and seventy-six young (ages 9 to 14 years) baseball pitchers were followed for one season. Data were collected from pre- and postseason questionnaires, injury and performance interviews after each game, pitch count logs, and video analysis of pitching mechanics. Generalized estimating equations and logistic regression analysis were used.\n\n\nRESULTS\nHalf of the subjects experienced elbow or shoulder pain during the season. The curveball was associated with a 52% increased risk of shoulder pain and the slider was associated with an 86% increased risk of elbow pain. There was a significant association between the number of pitches thrown in a game and during the season and the rate of elbow pain and shoulder pain.\n\n\nCONCLUSIONS\nPitchers in this age group should be cautioned about throwing breaking pitches (curveballs and sliders) because of the increased risk of elbow and shoulder pain. Limitations on pitches thrown in a game and in a season can also reduce the risk of pain. Further evaluation of pain and pitching mechanics is necessary.", "title": "" }, { "docid": "e4574b1e8241599b5c3ef740b461efba", "text": "Increasing awareness of ICS security issues has brought about a growing body of work in this area, including pioneering contributions based on realistic control system logs and network traces. This paper surveys the state of the art in ICS security research, including efforts of industrial researchers, highlighting the most interesting works. Research efforts are grouped into divergent areas, where we add “secure control” as a new category to capture security goals specific to control systems that differ from security goals in traditional IT systems.", "title": "" }, { "docid": "329420b8b13e8c315d341e382419315a", "text": "The aim of this research is to design an intelligent system that addresses the problem of real-time localization and navigation of visually impaired (VI) in an indoor environment using a monocular camera. 
Systems that have been developed so far for the VI use either many cameras (stereo and monocular) integrated with other sensors or use very complex algorithms that are computationally expensive. In this research work, a computationally less expensive integrated system has been proposed to combine imaging geometry, Visual Odometry (VO), Object Detection (OD) along with Distance-Depth (D-D) estimation algorithms for precise navigation and localization by utilizing a single monocular camera as the only sensor. The developed algorithm is tested for both standard Karlsruhe and indoor environment recorded datasets. Tests have been carried out in real-time using a smartphone camera that captures image data of the environment as the person moves and is sent over Wi-Fi for further processing to the MATLAB software model running on an Intel i7 processor. The algorithm provides accurate results on real-time navigation in the environment with an audio feedback about the person's location. The trajectory of the navigation is expressed in an arbitrary scale. Object detection based localization is accurate. The D-D estimation provides distance and depth measurements up to an accuracy of 94–98%.", "title": "" }, { "docid": "1e5073e73c371f1682d95bb3eedaf7f4", "text": "Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children – an ability that the current robot-assisted ASD intervention systems lack – to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing “understanding” robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.", "title": "" }, { "docid": "40099678d2c97013eb986d3be93eefb4", "text": "Mortality prediction of intensive care unit (ICU) patients facilitates hospital benchmarking and has the opportunity to provide caregivers with useful summaries of patient health at the bedside. The development of novel models for mortality prediction is a popular task in machine learning, with researchers typically seeking to maximize measures such as the area under the receiver operator characteristic curve (AUROC).
The number of ’researcher degrees of freedom’ that contribute to the performance of a model, however, presents a challenge when seeking to compare reported performance of such models. In this study, we review publications that have reported performance of mortality prediction models based on the Medical Information Mart for Intensive Care (MIMIC) database and attempt to reproduce the cohorts used in their studies. We then compare the performance reported in the studies against gradient boosting and logistic regression models using a simple set of features extracted from MIMIC. We demonstrate the large heterogeneity in studies that purport to conduct the single task of ’mortality prediction’, highlighting the need for improvements in the way that prediction tasks are reported to enable fairer comparison between models. We reproduced datasets for 38 experiments corresponding to 28 published studies using MIMIC. In half of the experiments, the sample size we acquired was 25% greater or smaller than the sample size reported. The highest discrepancy was 11,767 patients. While accurate reproduction of each study cannot be guaranteed, we believe that these results highlight the need for more consistent reporting of model design and methodology to allow performance improvements to be compared. We discuss the challenges in reproducing the cohorts used in the studies, highlighting the importance of clearly reported methods (e.g. data cleansing, variable selection, cohort selection) and the need for open code and publicly available benchmarks.", "title": "" }, { "docid": "f3c1ad1431d3aced0175dbd6e3455f39", "text": "BACKGROUND\nMethylxanthine therapy is commonly used for apnea of prematurity but in the absence of adequate data on its efficacy and safety. It is uncertain whether methylxanthines have long-term effects on neurodevelopment and growth.\n\n\nMETHODS\nWe randomly assigned 2006 infants with birth weights of 500 to 1250 g to receive either caffeine or placebo until therapy for apnea of prematurity was no longer needed. The primary outcome was a composite of death, cerebral palsy, cognitive delay (defined as a Mental Development Index score of <85 on the Bayley Scales of Infant Development), deafness, or blindness at a corrected age of 18 to 21 months.\n\n\nRESULTS\nOf the 937 infants assigned to caffeine for whom adequate data on the primary outcome were available, 377 (40.2%) died or survived with a neurodevelopmental disability, as compared with 431 of the 932 infants (46.2%) assigned to placebo for whom adequate data on the primary outcome were available (odds ratio adjusted for center, 0.77; 95% confidence interval [CI], 0.64 to 0.93; P=0.008). Treatment with caffeine as compared with placebo reduced the incidence of cerebral palsy (4.4% vs. 7.3%; adjusted odds ratio, 0.58; 95% CI, 0.39 to 0.87; P=0.009) and of cognitive delay (33.8% vs. 38.3%; adjusted odds ratio, 0.81; 95% CI, 0.66 to 0.99; P=0.04). The rates of death, deafness, and blindness and the mean percentiles for height, weight, and head circumference at follow-up did not differ significantly between the two groups.\n\n\nCONCLUSIONS\nCaffeine therapy for apnea of prematurity improves the rate of survival without neurodevelopmental disability at 18 to 21 months in infants with very low birth weight. 
(ClinicalTrials.gov number, NCT00182312 [ClinicalTrials.gov].).", "title": "" }, { "docid": "a0acd4870951412fa31bc7803f927413", "text": "Surprisingly little is understood about the physiologic and pathologic processes that involve intraoral sebaceous glands. Neoplasms are rare. Hyperplasia of these glands is undoubtedly more common, but criteria for the diagnosis of intraoral sebaceous hyperplasia have not been established. These lesions are too often misdiagnosed as large \"Fordyce granules\" or, when very large, as sebaceous adenomas. On the basis of a series of 31 nonneoplastic sebaceous lesions and on published data, the following definition is proposed: intraoral sebaceous hyperplasia occurs when a lesion, judged clinically to be a distinct abnormality that requires biopsy for diagnosis or confirmation of clinical impression, has histologic features of one or more well-differentiated sebaceous glands that exhibit no fewer than 15 lobules per gland. Sebaceous glands with fewer than 15 lobules that form an apparently distinct clinical lesion on the buccal mucosa are considered normal, whereas similar lesions of other intraoral sites are considered ectopic sebaceous glands. Sebaceous adenomas are less differentiated than sebaceous hyperplasia.", "title": "" }, { "docid": "23384db962a1eb524f40ca52f4852b14", "text": "Recent developments in Artificial Intelligence (AI) have generated a steep interest from media and general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) are moving from being perceived as a tool to being perceived as autonomous agents and team-mates, an important focus of research and development is understanding the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated? These and many other related questions are currently the focus of much attention. The way society and our systems will be able to deal with these questions will for a large part determine our level of trust, and ultimately, the impact of AI in society, and the existence of AI. Contrary to the frightening images of a dystopic future in media and popular fiction, where AI systems dominate the world and is mostly concerned with warfare, AI is already changing our daily lives mostly in ways that improve human health, safety, and productivity (Stone et al. 2016). This is the case in domain such as transportation; service robots; health-care; education; public safety and security; and entertainment. Nevertheless, and in order to ensure that those dystopic futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems is becoming one of the main influential areas of research in the last few years, and has led to several initiatives both from researchers as from practitioners, including the IEEE initiative on Ethics of Autonomous Systems1, the Foundation for Responsible Robotics2, and the Partnership on AI3 amongst several others. 
As the capabilities for autonomous decision making grow, perhaps the most important issue to consider is the need to rethink responsibility (Dignum 2017). Whatever their level of autonomy and social awareness and their ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods, algorithms are needed to integrate societal, legal and moral values into technological developments in AI, at all stages of development (analysis, design, construction, deployment and evaluation). These frameworks must deal both with the autonomic reasoning of the machine about such issues that we consider to have ethical impact, but most importantly, we need frameworks to guide design choices, to regulate the reaches of AI systems, to ensure proper data stewardship, and to help individuals determine their own involvement. Values are dependent on the socio-cultural context (Turiel 2002), and are often only implicit in deliberation processes, which means that methodologies are needed to elicit the values held by all the stakeholders, and to make these explicit can lead to better understanding and trust on artificial autonomous systems. That is, AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. Responsible Artificial Intelligence is about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and wellbeing in a sustainable world. In fact, Responsible AI is more than the ticking of some ethical ‘boxes’ in a report, or the development of some add-on features, or switch-off buttons in AI systems. Rather, responsibility is fundamental", "title": "" }, { "docid": "3688c987419daade77c44912fbc72ecf", "text": "We propose a visual food recognition framework that integrates the inherent semantic relationships among fine-grained classes. Our method learns semantics-aware features by formulating a multi-task loss function on top of a convolutional neural network (CNN) architecture. It then refines the CNN predictions using a random walk based smoothing procedure, which further exploits the rich semantic information. We evaluate our algorithm on a large \"food-in-the-wild\" benchmark, as well as a challenging dataset of restaurant food dishes with very few training images. The proposed method achieves higher classification accuracy than a baseline which directly fine-tunes a deep learning network on the target dataset. Furthermore, we analyze the consistency of the learned model with the inherent semantic relationships among food categories. Results show that the proposed approach provides more semantically meaningful results than the baseline method, even in cases of mispredictions.", "title": "" }, { "docid": "cc93f5a421ad0e5510d027b01582e5ae", "text": "This paper assesses the impact of financial reforms in Zimbabwe on savings and credit availability to small and medium scale enterprises (SMEs) and the poor. We established that the reforms improved domestic savings mobilization due to high deposit rates, the emergence of new financial institutions and products and the general increase in real incomes after the 1990 economic reforms. The study uncovered that inflation and real income were the major determinants of savings during the sample period. 
High lending rates and the use of conventional lending methodologies by banks restricted access to credit by the SMEs and the poor. JEL Classification Numbers: E21, O16.", "title": "" }, { "docid": "0c177af9c2fffa6c4c667d1b4a4d3d79", "text": "In the last decade, a large number of different software component models have been developed, with different aims and using different principles and technologies. This has resulted in a number of models which have many similarities, but also principal differences, and in many cases unclear concepts. Component-based development has not succeeded in providing standard principles, as has, for example, object-oriented development. In order to increase the understanding of the concepts and to differentiate component models more easily, this paper identifies, discusses, and characterizes fundamental principles of component models and provides a Component Model Classification Framework based on these principles. Further, the paper classifies a large number of component models using this framework.", "title": "" }, { "docid": "f996b9911692cc835e55e561c3a501db", "text": "This study proposes a clustering-based Wi-Fi fingerprinting localization algorithm. The proposed algorithm first presents a novel support vector machine based clustering approach, namely SVM-C, which uses the margin between two canonical hyperplanes for classification instead of using the Euclidean distance between two centroids of reference locations. After creating the clusters of fingerprints by SVM-C, our positioning system embeds the classification mechanism into a positioning task and compensates for the large database searching problem. The proposed algorithm assigns the matched cluster surrounding the test sample and locates the user based on the corresponding cluster's fingerprints to reduce the computational complexity and remove estimation outliers. Experimental results from realistic Wi-Fi test-beds demonstrated that our approach apparently improves the positioning accuracy. As compared to three existing clustering-based methods, K-means, affinity propagation, and support vector clustering, the proposed algorithm reduces the mean localization errors by 25.34%, 25.21%, and 26.91%, respectively.", "title": "" }, { "docid": "a2fe18fde80d729b9142ad116dbf5ba3", "text": "We present a physically interpretable, continuous threedimensional (3D) model for handling occlusions with applications to road scene understanding. We probabilistically assign each point in space to an object with a theoretical modeling of the reflection and transmission probabilities for the corresponding camera ray. Our modeling is unified in handling occlusions across a variety of scenarios, such as associating structure from motion (SFM) point tracks with potentially occluding objects or modeling object detection scores in applications such as 3D localization. For point track association, our model uniformly handles static and dynamic objects, which is an advantage over motion segmentation approaches traditionally used in multibody SFM. Detailed experiments on the KITTI raw dataset show the superiority of the proposed method over both state-of-the-art motion segmentation and a baseline that heuristically uses detection bounding boxes for resolving occlusions. 
We also demonstrate how our continuous occlusion model may be applied to the task of 3D localization in road scenes.", "title": "" }, { "docid": "20b00a2cc472dfec851f4aea42578a9e", "text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be egodepleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.", "title": "" }, { "docid": "f9d1fcca8fb8f83bdb2391d4fe0ba4ef", "text": "Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source) and the feed-forward units activation of the trained network, at a certain layer of the network, is used as a generic representation of an input image for a task with relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. It includes parameters for training of the source ConvNet such as its architecture, distribution of the training data, etc. and also the parameters of feature extraction such as layer of the trained ConvNet, dimensionality reduction, etc. Then, by optimizing these factors, we show that significant improvements can be achieved on various (17) visual recognition tasks. We further show that these visual recognition tasks can be categorically ordered based on their similarity to the source task such that a correlation between the performance of tasks and their similarity to the source task w.r.t. the proposed factors is observed.", "title": "" } ]
scidocsrr
725400ce7c5aebb6a73a49362a5ec61f
Credibility Assessment in the News: Do we need to read?
[ { "docid": "a31ca7f2c2fce4a4f26d420f4aa91a91", "text": "Transition-based dependency parsers usually use transition systems that monotonically extend partial parse states until they identify a complete parse tree. Honnibal et al. (2013) showed that greedy onebest parsing accuracy can be improved by adding additional non-monotonic transitions that permit the parser to “repair” earlier parsing mistakes by “over-writing” earlier parsing decisions. This increases the size of the set of complete parse trees that each partial parse state can derive, enabling such a parser to escape the “garden paths” that can trap monotonic greedy transition-based dependency parsers. We describe a new set of non-monotonic transitions that permits a partial parse state to derive a larger set of completed parse trees than previous work, which allows our parser to escape from a larger set of garden paths. A parser with our new nonmonotonic transition system has 91.85% directed attachment accuracy, an improvement of 0.6% over a comparable parser using the standard monotonic arc-eager transitions.", "title": "" }, { "docid": "ee665e5a3d032a4e9b4e95cddac0f95c", "text": "On p. 219, we describe the data we collected from BuzzSumo as “the total number of times each article was shared on Facebook” (emph. added). In fact, the BuzzSumo data are the number of engagements with each article, defined as the sum of shares, comments, and other interactions such as “likes.” All references to counts of Facebook shares in the paper and the online appendix based on the BuzzSumo data should be replaced with references to counts of Facebook engagements. None of the tables or figures in either the paper or the online appendix are affected by this change, nor does the change affect the results based on our custom survey. None of the substantive conclusions of the paper are affected with one exception discussed below, where our substantive conclusion is strengthened. Examples of cases where the text should be changed:", "title": "" } ]
[ { "docid": "7ecba9c479a754ad55664bf8208643e0", "text": "One of the important problems that our society facing is people with disabilities which are finding hard to cope up with the fast growing technology. About nine billion people in the world are deaf and dumb. Communications between deaf-dumb and a normal person have always been a challenging task. Generally deaf and dumb people use sign language for communication, Sign language is an expressive and natural way for communication between normal and dumb people. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity, the artificial mouth is introduced for the dumb people. So, we need a translator to understand what they speak and communicate with us. Hence makes the communication between normal person and disabled people easier. This work aims to lower the barrier of disabled persons in communication. The main aim of this proposed work is to develop a cost effective system which can give voice to voiceless people with the help of Sign language. In the proposed work, the captured images are processed through MATLAB in PC and converted into speech through speaker and text in LCD by interfacing with Arduino. Keyword : Disabled people, Sign language, Image Processing, Arduino, LCD display, Speaker.", "title": "" }, { "docid": "e87a52f3e4f3c08838a2eff7501a12e5", "text": "A coordinated approach to digital forensic readiness (DFR) in a large organisation requires the management and monitoring of a wide variety of resources, both human and technical. The resources involved in DFR in large organisations typically include staff from multiple departments and business units, as well as network infrastructure and computing platforms. The state of DFR within large organisations may therefore be adversely affected if the myriad human and technical resources involved are not managed in an optimal manner. This paper contributes to DFR by proposing the novel concept of a digital forensic readiness management system (DFRMS). The purpose of a DFRMS is to assist large organisations in achieving an optimal level of management for DFR. In addition to this, we offer an architecture for a DFRMS. This architecture is based on requirements for DFR that we ascertained from an exhaustive review of the DFR literature. We describe the architecture in detail and show that it meets the requirements set out in the DFR literature. The merits and disadvantages of the architecture are also discussed. Finally, we describe and explain an early prototype of a DFRMS.", "title": "" }, { "docid": "fc9ddeeae99a4289d5b955c9ba90c682", "text": "In recent years there have been growing calls for forging greater connections between education and cognitive neuroscience.As a consequence great hopes for the application of empirical research on the human brain to educational problems have been raised. In this article we contend that the expectation that results from cognitive neuroscience research will have a direct and immediate impact on educational practice are shortsighted and unrealistic. 
Instead, we argue that an infrastructure needs to be created, principally through interdisciplinary training, funding and research programs that allow for bidirectional collaborations between cognitive neuroscientists, educators and educational researchers to grow.We outline several pathways for scaffolding such a basis for the emerging field of ‘Mind, Brain and Education’ to flourish as well as the obstacles that are likely to be encountered along the path.", "title": "" }, { "docid": "77e30fedf56545ba22ae9f1ef17b4dc9", "text": "Most of current self-checkout systems rely on barcodes, RFID tags, or QR codes attached on items to distinguish products. This paper proposes an Intelligent Self-Checkout System (ISCOS) embedded with a single camera to detect multiple products without any labels in real-time performance. In addition, deep learning skill is applied to implement product detection, and data mining techniques construct the image database employed as training dataset. Product information gathered from a number of markets in Taiwan is utilized to make recommendation to customers. The bounding boxes are annotated by background subtraction with a fixed camera to avoid time-consuming process for each image. The contribution of this work is to combine deep learning and data mining approaches to real-time multi-object detection in image-based checkout system.", "title": "" }, { "docid": "3c907a3e7ff704348e78239b2b54b917", "text": "Real-time traffic surveillance is essential in today’s intelligent transportation systems and will surely play a vital role in tomorrow’s smart cities. The work detailed in this paper reports on the development and implementation of a novel smart wireless sensor for traffic monitoring. Computationally efficient and reliable algorithms for vehicle detection, speed and length estimation, classification, and time-synchronization were fully developed, integrated, and evaluated. Comprehensive system evaluation and extensive data analysis were performed to tune and validate the system for a reliable and robust operation. Several field studies conducted on highway and urban roads for different scenarios and under various traffic conditions resulted in 99.98% detection accuracy, 97.11% speed estimation accuracy, and 97% length-based vehicle classification accuracy. The developed system is portable, reliable, and cost-effective. The system can also be used for short-term or long-term installment on surface of highway, roadway, and roadside. Implementation cost of a single node including enclosure is US $50.", "title": "" }, { "docid": "9f348ac8bae993ddf225f47dfa20182b", "text": "BACKGROUND\nTreatment of giant melanocytic nevi (GMN) remains a multidisciplinary challenge. We present analysis of diagnostics, treatment, and follow- up in children with GMN to establish obligatory procedures in these patients.\n\n\nMATERIAL/METHODS\nIn 24 children with GMN, we analyzed: localization, main nevus diameter, satellite nevi, brain MRI, catecholamines concentrations in 24-h urine collection, surgery stages number, and histological examinations. The t test was used to compare catecholamines concentrations in patient subgroups.\n\n\nRESULTS\nNine children had \"bathing trunk\" nevus, 7 had main nevus on the back, 6 on head/neck, and 2 on neck/shoulder and neck/thorax. Brain MRI revealed neurocutaneous melanosis (NCM) in 7/24 children (29.2%), symptomatic in 1. 
Among urine catecholamines levels from 20 patients (33 samples), dopamine concentration was elevated in 28/33, noradrenaline in 15, adrenaline in 11, and vanillylmandelic acid in 4. In 6 NCM children, all catecholamines concentrations were higher than in patients without NCM (statistically insignificant). In all patients, histological examination of excised nevi revealed compound nevus, with neurofibromatic component in 15 and melanoma in 2. They remain without recurrence/metastases at 8- and 3-year-follow-up. There were 4/7 NCM patients with more than 1 follow-up MRI; in 1 a new melanin deposit was found and in 3 there was no progression.\n\n\nCONCLUSIONS\nEarly excision with histological examination speeds the diagnosis of melanoma. Brain MRI is necessary to confirm/rule-out NCM. High urine dopamine concentration in GMN children, especially with NCM, is an unpublished finding that can indicate patients with more serious neurological disease. Treatment of GMN children should be tailored individually for each case with respect to all medical/psychological aspects.", "title": "" }, { "docid": "e8bbbc1864090b0246735868faa0e11f", "text": "A pre-trained deep convolutional neural network (DCNN) is the feed-forward computation perspective which is widely used for the embedded vision systems. In the DCNN, the 2D convolutional operation occupies more than 90% of the computation time. Since the 2D convolutional operation performs massive multiply-accumulation (MAC) operations, conventional realizations could not implement a fully parallel DCNN. The RNS decomposes an integer into a tuple of L integers by residues of moduli set. Since no pair of modulus have a common factor with any other, the conventional RNS decomposes the MAC unit into circuits with different sizes. It means that the RNS could not utilize resources of an FPGA with uniform size. In this paper, we propose the nested RNS (NRNS), which recursively decompose the RNS. It can decompose the MAC unit into circuits with small sizes. In the DCNN using the NRNS, a 48-bit MAC unit is decomposed into 4-bit ones realized by look-up tables of the FPGA. In the system, we also use binary to NRNS converters and NRNS to binary converters. The binary to NRNS converter is realized by on-chip BRAMs, while the NRNS to binary one is realized by DSP blocks and BRAMs. Thus, a balanced usage of FPGA resources leads to a high clock frequency with less hardware. The ImageNet DCNN using the NRNS is implemented on a Xilinx Virtex VC707 evaluation board. As for the performance per area GOPS (Giga operations per second) per a slice, the proposed one is 5.86 times better than the existing best realization.", "title": "" }, { "docid": "1ebb827b9baf3307bc20de78538d23e7", "text": "0747-5632/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2013.07.003 ⇑ Corresponding author. Address: University of North Texas, College of Business, 1155 Union Circle #311160, Denton, TX 76203-5017, USA. E-mail addresses: mohammad.salehan@unt.edu (M. Salehan), arash.negah ban@unt.edu (A. Negahban). 1 These authors contributed equally to the work. Mohammad Salehan 1,⇑, Arash Negahban 1", "title": "" }, { "docid": "d17f6ed783c0ec33e4c74171db82392b", "text": "Caffeic acid phenethyl ester, derived from natural propolis, has been reported to have anti-cancer properties. Voltage-gated sodium channels are upregulated in many cancers where they promote metastatic cell behaviours, including invasiveness. 
We found that micromolar concentrations of caffeic acid phenethyl ester blocked voltage-gated sodium channel activity in several invasive cell lines from different cancers, including breast (MDA-MB-231 and MDA-MB-468), colon (SW620) and non-small cell lung cancer (H460). In the MDA-MB-231 cell line, which was adopted as a 'model', long-term (48 h) treatment with 18 μM caffeic acid phenethyl ester reduced the peak current density by 91% and shifted steady-state inactivation to more hyperpolarized potentials and slowed recovery from inactivation. The effects of long-term treatment were also dose-dependent, 1 μM caffeic acid phenethyl ester reducing current density by only 65%. The effects of caffeic acid phenethyl ester on metastatic cell behaviours were tested on the MDA-MB-231 cell line at a working concentration (1 μM) that did not affect proliferative activity. Lateral motility and Matrigel invasion were reduced by up to 14% and 51%, respectively. Co-treatment of caffeic acid phenethyl ester with tetrodotoxin suggested that the voltage-gated sodium channel inhibition played a significant intermediary role in these effects. We conclude, first, that caffeic acid phenethyl ester does possess anti-metastatic properties. Second, the voltage-gated sodium channels, commonly expressed in strongly metastatic cancers, are a novel target for caffeic acid phenethyl ester. Third, more generally, ion channel inhibition can be a significant mode of action of nutraceutical compounds.", "title": "" }, { "docid": "b29f2d688e541463b80006fac19eaf20", "text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.", "title": "" }, { "docid": "be283056a8db3ab5b2481f3dc1f6526d", "text": "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. 
We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.", "title": "" }, { "docid": "460b8f82e5c378c7d866d92339e14572", "text": "When the number of projections does not satisfy the Shannon/Nyquist sampling requirement, streaking artifacts are inevitable in x-ray computed tomography (CT) images reconstructed using filtered backprojection algorithms. In this letter, the spatial-temporal correlations in dynamic CT imaging have been exploited to sparsify dynamic CT image sequences and the newly proposed compressed sensing (CS) reconstruction method is applied to reconstruct the target image sequences. A prior image reconstructed from the union of interleaved dynamical data sets is utilized to constrain the CS image reconstruction for the individual time frames. This method is referred to as prior image constrained compressed sensing (PICCS). In vivo experimental animal studies were conducted to validate the PICCS algorithm, and the results indicate that PICCS enables accurate reconstruction of dynamic CT images using about 20 view angles, which corresponds to an under-sampling factor of 32. This undersampling factor implies a potential radiation dose reduction by a factor of 32 in myocardial CT perfusion imaging.", "title": "" }, { "docid": "cbc6bd586889561cc38696f758ad97d2", "text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.", "title": "" }, { "docid": "443637fcc9f9efcf1026bb64aa0a9c97", "text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.", "title": "" }, { "docid": "3b03af1736709e536a4a58363102bc60", "text": "Music transcription, as an essential component in music signal processing, contributes to wide applications in musicology, accelerates the development of commercial music industry, facilitates the music education as well as benefits extensive music lovers. However, the work relies on a lot of manual work due to heavy requirements on knowledge and experience. This project mainly examines two deep learning methods, DNN and LSTM, to automatize music transcription. We transform the audio files into spectrograms using constant Q transform and extract features from the spectrograms. 
Deep learning methods have the advantage of learning complex features in music transcription. The promising results verify that deep learning methods are capable of learning specific musical properties, including notes and rhythms. Keywords—automatic music transcription; deep learning; deep neural network (DNN); long shortterm memory networks (LSTM)", "title": "" }, { "docid": "3c4712f1c54f3d9d8d4297d9ab0b619f", "text": "In this paper, we introduce Cellular Automata-a dynamic evolution model to intuitively detect the salient object. First, we construct a background-based map using color and space contrast with the clustered boundary seeds. Then, a novel propagation mechanism dependent on Cellular Automata is proposed to exploit the intrinsic relevance of similar regions through interactions with neighbors. Impact factor matrix and coherence matrix are constructed to balance the influential power towards each cell's next state. The saliency values of all cells will be renovated simultaneously according to the proposed updating rule. It's surprising to find out that parallel evolution can improve all the existing methods to a similar level regardless of their original results. Finally, we present an integration algorithm in the Bayesian framework to take advantage of multiple saliency maps. Extensive experiments on six public datasets demonstrate that the proposed algorithm outperforms state-of-the-art methods.", "title": "" }, { "docid": "fdd998012aa9b76ba9fe4477796ddebb", "text": "Low-power wireless devices must keep their radio transceivers off as much as possible to reach a low power consumption, but must wake up often enough to be able to receive communication from their neighbors. This report describes the ContikiMAC radio duty cycling mechanism, the default radio duty cycling mechanism in Contiki 2.5, which uses a power efficient wake-up mechanism with a set of timing constraints to allow device to keep their transceivers off. With ContikiMAC, nodes can participate in network communication yet keep their radios turned off for roughly 99% of the time. This report describes the ContikiMAC mechanism, measures the energy consumption of individual ContikiMAC operations, and evaluates the efficiency of the fast sleep and phase-lock optimizations.", "title": "" }, { "docid": "df69a701bca12d3163857a9932ef51e2", "text": "Students often have their own individual laptop computers in university classes, and researchers debate the potential benefits and drawbacks of laptop use. In the presented research, we used a combination of surveys and in-class observations to study how students use their laptops in an unmonitored and unrestricted class setting—a large lecture-based university class with nearly 3000 enrolled students. By analyzing computer use over the duration of long (165 minute) classes, we demonstrate how computer use changes over time. The observations and studentreports provided similar descriptions of laptop activities. Note taking was the most common use for the computers, followed by the use of social media web sites. Overall, the data show that students engaged in off-task computer activities for nearly two-thirds of the time. 
An analysis of the frequency of the various laptop activities over time showed that engagement in individual activities varied significantly over the duration of the class.", "title": "" }, { "docid": "b513d1cbf3b2f649afcea4d0ab6784ac", "text": "RoboSimian is a quadruped robot inspired by an ape-like morphology, with four symmetric limbs that provide a large dexterous workspace and high torque output capabilities. Advantages of using RoboSimian for rough terrain locomotion include (1) its large, stable base of support, and (2) existence of redundant kinematic solutions, toward avoiding collisions with complex terrain obstacles. However, these same advantages provide significant challenges in experimental implementation of walking gaits. Specifically: (1) a wide support base results in high variability of required body pose and foothold heights, in particular when compared with planning for humanoid robots, (2) the long limbs on RoboSimian have a strong proclivity for self-collision and terrain collision, requiring particular care in trajectory planning, and (3) having rear limbs outside the field of view requires adequate perception with respect to a world map. In our results, we present a tractable means of planning statically stable and collision-free gaits, which combines practical heuristics for kinematics with traditional randomized (RRT) search algorithms. In planning experiments, our method outperforms other tested methodologies. Finally, real-world testing indicates that perception limitations provide the greatest challenge in real-world implementation.", "title": "" }, { "docid": "04d110e130c5d7dc56c2d8e63857e9aa", "text": "OBJECTIVE\nThis study aimed to assess weight bias among professionals who specialize in treating eating disorders and identify to what extent their weight biases are associated with attitudes about treating obese patients.\n\n\nMETHOD\nParticipants were 329 professionals treating eating disorders, recruited through professional organizations that specialize in eating disorders. Participants completed anonymous, online self-report questionnaires, assessing their explicit weight bias, perceived causes of obesity, attitudes toward treating obese patients, perceptions of treatment compliance and success of obese patients, and perceptions of weight bias among other practitioners.\n\n\nRESULTS\nNegative weight stereotypes were present among some professionals treating eating disorders. Although professionals felt confident (289; 88%) and prepared (276; 84%) to provide treatment to obese patients, the majority (184; 56%) had observed other professionals in their field making negative comments about obese patients, 42% (138) believed that practitioners who treat eating disorders often have negative stereotypes about obese patients, 35% (115) indicated that practitioners feel uncomfortable caring for obese patients, and 29% (95) reported that their colleagues have negative attitudes toward obese patients. Compared to professionals with less weight bias, professionals with stronger weight bias were more likely to attribute obesity to behavioral causes, expressed more negative attitudes and frustrations about treating obese patients, and perceived poorer treatment outcomes for these patients.\n\n\nDISCUSSION\nSimilar to other health disciplines, professionals treating eating disorders are not immune to weight bias. 
This has important implications for provision of clinical treatment with obese individuals and efforts to reduce weight bias in the eating disorders field.", "title": "" } ]
scidocsrr
90788d5ce593a102ea5586c4a2a894f2
Segmentation of volumetric MRA images by using capillary active contour
[ { "docid": "f3c2663cb0341576d754bb6cd5f2c0f5", "text": "This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.", "title": "" }, { "docid": "e7f1e8f82c91c7afd4d58c9987f3e95e", "text": "ÐA level set method for capturing the interface between two ¯uids is combined with a variable density projection method to allow for computation of a two-phase ¯ow where the interface can merge/ break and the ¯ow can have a high Reynolds number. A distance function formulation of the level set method enables us to compute ¯ows with large density ratios (1000/1) and ¯ows that are surface tension driven, with no emotional involvement. Recent work has improved the accuracy of the distance function formulation and the accuracy of the advection scheme. We compute ¯ows involving air bubbles and water drops, among others. We validate our code against experiments and theory. In Ref. [1] an Eulerian scheme was described for computing incompressible two-¯uid ¯ow where the density ratio across the interface is large (e.g. air/water) and both surface tension and vis-cous e€ects are included. In this paper, we modify our scheme improving both the accuracy and eciency of the algorithm. We use a level set function tòcapture' the air/water interface thus allowing us to eciently compute ¯ows with complex interfacial structure. In Ref. [1], a new iterative process was devised in order to maintain the level set function as the signed distance from the air/water interface. Since we know the distance from the interface at any point in the domain, we can give the interface a thickness of size O(h); this allows us to compute with sti€ surface tension e€ects and steep density gradients. We have since imposed a new`constraint' on the iterative process improving the accuracy of this process. We have also upgraded our scheme to using higher order ENO for spatial derivatives, and high order Runge±Kutta for the time dis-cretization (see Ref. [2]). An example of the problems we wish to solve is illustrated in Fig. 1. An air bubble rises up to the water surface and then`bursts', emitting a jet of water that eventually breaks up into satellite drops. It is a very dicult problem involving much interfacial complexity and sti€ surface tension e€ects. The density ratio at the interface is ca 1000/1. In Ref. [3], the boundary integral method was used to compute thèbubble-burst' problem and compared with experimental results. 
The boundary integral method is a very good method for inviscid air/water problems because, as a Lagrangian based scheme, only points on the interface need to be discretized. Unfortunately, if one wants to include the merging and breaking …", "title": "" }, { "docid": "d3a18f5ad29f2eddd7eb32c561389212", "text": "Interpretation of magnetic resonance angiography (MRA) is problematic due to complexities of vascular shape and to artifacts such as the partial volume effect. The authors present new methods to assist in the interpretation of MRA. These include methods for detection of vessel paths and for determination of branching patterns of vascular trees. They are based on the ordered region growing (ORG) algorithm that represents the image as an acyclic graph, which can be reduced to a skeleton by specifying vessel endpoints or by a pruning process. Ambiguities in the vessel branching due to vessel overlap are effectively resolved by heuristic methods that incorporate a priori knowledge of bifurcation spacing. Vessel paths are detected at interactive speeds on a 500-MHz processor using vessel endpoints. These methods apply best to smaller vessels where the image intensity peaks at the center of the lumen which, for the abdominal MRA, includes vessels whose diameter is less than 1 cm.", "title": "" } ]
[ { "docid": "24151cf5d4481ba03e6ffd1ca29f3441", "text": "The design, fabrication and characterization of 79 GHz slot antennas based on substrate integrated waveguides (SIW) are presented in this paper. All the prototypes are fabricated in a polyimide flex foil using printed circuit board (PCB) fabrication processes. A novel concept is used to minimize the leakage losses of the SIWs at millimeter wave frequencies. Different losses in the SIWs are analyzed. SIW-based single slot antenna, longitudinal and four-by-four slot array antennas are numerically and experimentally studied. Measurements of the antennas show approximately 4.7%, 5.4% and 10.7% impedance bandwidth (S11=-10 dB) with 2.8 dBi, 6.0 dBi and 11.0 dBi maximum antenna gain around 79 GHz, respectively. The measured results are in good agreement with the numerical simulations.", "title": "" }, { "docid": "e4d35033649087965951736fe7565d6d", "text": "Much recent work has explored the challenge of nonvisual text entry on mobile devices. While researchers have attempted to solve this problem with gestures, we explore a different modality: speech. We conducted a survey with 169 blind and sighted participants to investigate how often, what for, and why blind people used speech for input on their mobile devices. We found that blind people used speech more often and input longer messages than sighted people. We then conducted a study with 8 blind people to observe how they used speech input on an iPod compared with the on-screen keyboard with VoiceOver. We found that speech was nearly 5 times as fast as the keyboard. While participants were mostly satisfied with speech input, editing recognition errors was frustrating. Participants spent an average of 80.3% of their time editing. Finally, we propose challenges for future work, including more efficient eyes-free editing and better error detection methods for reviewing text.", "title": "" }, { "docid": "92583a036066d87f857ae1be2a9ed109", "text": "The OpenCog software development framework, for advancement of the development and testing of powerful and responsible integrative AGI, is described. The OpenCog Framework (OCF) 1.0, to be released in 2008 under the GPLv2, is comprised of a collection of portable libraries for OpenCog applications, plus an initial collection of cognitive algorithms that operate within the OpenCog framework. The OCF libraries include a flexible knowledge representation embodied in a scalable knowledge store, a cognitive process scheduler, and a plug-in architecture for allowing interaction between cognitive, perceptual, and control algorithms.", "title": "" }, { "docid": "d86517401c90186abb31895028d6f18b", "text": "The widespread of ultrasound as a guide to regional anesthesia has allowed the development of numerous alternatives to paravertebral block in breast surgery named fascial or myofascial blocks [1,2]. We chose to use a bilateral ultrasound-guided erector-spinae plane (ESP)blocks in a patient scheduled for breast cancer surgery that rejected epidural analgesia. We present this case report once obtained written informed consent from the patient. A 59-year-oldwoman, height 156 cmandweight 54 kg, ASA2 smoker with history of chronic hypertension and chronic obstructive pulmonary disease was scheduled for right subcutaneous mastectomy with nipple-areola skin sparing due to a breast cancer. A sentinel lymph-", "title": "" }, { "docid": "5192d78f1ea78f0bcaae0433357c25d7", "text": "The ISO 26262 standard defines functional safety for automotive E/E systems. 
Since the publication of the first edition of this standard in 2011, many different safety techniques complying to the ISO 26262 have been developed. However, it is not clear which parts and (sub-) phases of the standard are targeted by these techniques and which objectives of the standard are particularly addressed. Therefore, we carried out a gap analysis to identify gaps between the safety standard objectives of the part 3 till 7 and the existing techniques. In this paper the results of the gap analysis are presented such as we identified that there is a lack of mature tool support for the ASIL sub-phase and a need for a common platform for the entire product development cycle.", "title": "" }, { "docid": "3daa9fc7d434f8a7da84dd92f0665564", "text": "In this article we analyze the response of Time of Flight cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. Time of Flight sensors are sensitive to ambient light and have low resolution but deliver high frame rate accurate depth data under suitable conditions. We introduce some metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying exposures of the sensors. Performance of three different time of flight cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancellation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. stereo vision is more robust to ambient illumination and provides high resolution depth data but it is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive as compared to local correlation. Finally, we propose a method to increase the dynamic range of the ToF cameras for a scene involving both shadow and sunlight exposures at the same time using camera flags (PMD) or confidence matrix (SwissRanger).", "title": "" }, { "docid": "ae4974a3d7efedab7cd6651101987e79", "text": "Fisher Kernels and Deep Learning were two developments with significant impact on large-scale object categorization in the last years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. 
We introduce a gradient descent based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup.", "title": "" }, { "docid": "72ca634d0236b25a943e60331b43f055", "text": "3D models derived from point clouds are useful in various shapes to optimize the trade-off between precision and geometric complexity. They are defined at different granularity levels according to each indoor situation. In this article, we present an integrated 3D semantic reconstruction framework that leverages segmented point cloud data and domain ontologies. Our approach follows a part-to-whole conception which models a point cloud in parametric elements usable per instance and aggregated to obtain a global 3D model. We first extract analytic features, object relationships and contextual information to permit better object characterization. Then, we propose a multi-representation modelling mechanism augmented by automatic recognition and fitting from the 3D library ModelNet10 to provide the best candidates for several 3D scans of furniture. Finally, we combine every element to obtain a consistent indoor hybrid 3D model. The method allows a wide range of applications from interior navigation to virtual stores.", "title": "" }, { "docid": "4d4540a59e637f9582a28ed62083bfd6", "text": "Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentencelevel neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis.", "title": "" }, { "docid": "13f24b04e37c9e965d85d92e2c588c9a", "text": "In this paper we propose a new user purchase preference model based on their implicit feedback behavior. We analyze user behavior data to seek their purchase preference signals. We find that if a user has more purchase preference on a certain item he would tend to browse it for more times. It gives us an important inspiration that, not only purchasing behavior but also other types of implicit feedback like browsing behavior, can indicate user purchase preference. 
We further find that user purchase preference signals also exist in the browsing behavior of item categories. Therefore, when we want to predict user purchase preference for certain items, we can integrate these behavior types into our user preference model by converting such preference signals into numerical values. We evaluate our model on a real-world dataset from a shopping site in China. Results further validate that user purchase preference model in our paper can capture more and accurate user purchase preference information from implicit feedback and greatly improves the performance of user purchase prediction.", "title": "" }, { "docid": "5484ad5af4d1133683e95bc0178564f0", "text": "Two studies investigated the connection between narcissism and sensitivity to criticism. In study 1, participants completed the Narcissistic Personality Inventory (NPI) and the Sensitivity to Criticism Scale (SCS) and were asked to construct and deliver speeches to be rated by performance judges. They were then asked whether they would like to receive evaluative feedback. Narcissism and sensitivity to criticism were mildly, but not significantly, negatively correlated and had contrasting relationships with choices regarding feedback. Highly narcissistic participants tended to seek (rather than avoid) feedback, whereas highly sensitive participants tended to reject feedback opportunities. Study 2 examined the relationship between sensitivity to criticism and both overt and covert narcissism. Those scoring high on the trait narcissism, as measured by the NPI, tended to be less sensitive to criticism, sought (rather than avoided) feedback opportunities, experienced little internalized negative emotions in response to “extreme” feedback conditions, and did not expect to ruminate over their performance. By contrast, participants scoring high on a measure of “covert narcissism” were high in sensitivity to criticism, tended to avoid feedback opportunities, experienced high levels of internalized negative emotions, and showed high levels of expected rumination. These findings suggest that the relationship between narcissism and sensitivity to criticism is highly dependent upon the definition or “form” of narcissism considered.", "title": "" }, { "docid": "e0f797ff66a81b88bbc452e86864d7bc", "text": "A key challenge in radar micro-Doppler classification is the difficulty in obtaining a large amount of training data due to costs in time and human resources. Small training datasets limit the depth of deep neural networks (DNNs), and, hence, attainable classification accuracy. In this work, a novel method for diversifying Kinect-based motion capture (MOCAP) simulations of human micro-Doppler to span a wider range of potential observations, e.g. speed, body size, and style, is proposed. By applying three transformations, a small set of MOCAP measurements is expanded to generate a large training dataset for network initialization of a 30-layer deep residual neural network. Results show that the proposed training methodology and residual DNN yield improved bottleneck feature performance and the highest overall classification accuracy among other DNN architectures, including transfer learning from the 1.5 million sample ImageNet database.", "title": "" }, { "docid": "be8e1e4fd9b8ddb0fc7e1364455999e8", "text": "In this paper, we describe the development and exploitation of a corpus-based tool for the identification of metaphorical patterns in large datasets. 
The analysis of metaphor as a cognitive and cultural, rather than solely linguistic, phenomenon has become central as metaphor researchers working within ‘Cognitive Metaphor Theory’ have drawn attention to the presence of systematic and pervasive conventional metaphorical patterns in ‘ordinary’ language (e.g. I’m at a crossroads in my life). Cognitive Metaphor Theory suggests that these linguistic patterns reflect the existence of conventional conceptual metaphors, namely systematic cross-domain correspondences in conceptual structure (e.g. LIFE IS A JOURNEY). This theoretical approach, described further in section 2, has led to considerable advances in our understanding of metaphor both as a linguistic device and a cognitive model, and to our awareness of its role in many different genres and discourses. Although some recent research has incorporated corpus linguistic techniques into this framework for the analysis of metaphor, to date, such analyses have primarily involved the concordancing of pre-selected search strings (e.g. Deignan 2005). The method described in this paper represents an attempt to extend the limits of this form of analysis. In our approach, we have applied an existing semantic field annotation tool (USAS) developed at Lancaster University to aid metaphor researchers in searching corpora. We are able to filter all possible candidate semantic fields proposed by USAS to assist in finding possible ‘source’ (e.g. JOURNEY) and ‘target’ (e.g. LIFE) domains, and we can then go on to consider the potential metaphoricity of the expressions included under each possible source domain. This method thus enables us to identify open-ended sets of metaphorical expressions, which are not limited to predetermined search strings. In section 3, we present this emerging methodology for the computer-assisted analysis of metaphorical patterns in discourse. The semantic fields automatically annotated by USAS can be seen as roughly corresponding to the domains of metaphor theory. We have used USAS in combination with key word and domain techniques in Wmatrix (Rayson, 2003) to replicate earlier manual analyses, e.g. machine metaphors in Ken Kesey’s One Flew Over the Cuckoo’s Nest (Semino and Swindlehurst, 1996) and war, machine and organism metaphors in business magazines (Koller, 2004a). These studies are described in section 4.", "title": "" }, { "docid": "7d014f64578943f8ec8e5e27d313e148", "text": "In this paper, we extend the Divergent Component of Motion (DCM, also called `Capture Point') to 3D. We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external (e.g. leg) forces and the total force (i.e. external forces plus gravity) acting on the robot. Based on eCMP, VRP and DCM, we present a method for real-time planning and control of DCM trajectories in 3D. We address the problem of underactuation and propose methods to guarantee feasibility of the finally commanded forces. The capabilities of the proposed control framework are verified in simulations.", "title": "" }, { "docid": "6fc870c703611e07519ce5fe956c15d1", "text": "Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. 
However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.", "title": "" }, { "docid": "a968a9842bb49f160503b24bff57cdd6", "text": "This paper addresses target discrimination in synthetic aperture radar (SAR) imagery using linear and nonlinear adaptive networks. Neural networks are extensively used for pattern classification but here the goal is discrimination. We show that the two applications require different cost functions. We start by analyzing with a pattern recognition perspective the two-parameter constant false alarm rate (CFAR) detector which is widely utilized as a target detector in SAR. Then we generalize its principle to construct the quadratic gamma discriminator (QGD), a nonparametrically trained classifier based on local image intensity. The linear processing element of the QCD is further extended with nonlinearities yielding a multilayer perceptron (MLP) which we call the NL-QGD (nonlinear QGD). MLPs are normally trained based on the L(2) norm. We experimentally show that the L(2) norm is not recommended to train MLPs for discriminating targets in SAR. Inspired by the Neyman-Pearson criterion, we create a cost function based on a mixed norm to weight the false alarms and the missed detections differently. Mixed norms can easily be incorporated into the backpropagation algorithm, and lead to better performance. Several other norms (L(8), cross-entropy) are applied to train the NL-QGD and all outperformed the L(2) norm when validated by receiver operating characteristics (ROC) curves. The data sets are constructed from TABILS 24 ISAR targets embedded in 7 km(2) of SAR imagery (MIT/LL mission 90).", "title": "" }, { "docid": "dd3d8d5d623a4bed6fb0939e15caa056", "text": "This paper investigates a number of computational intelligence techniques in the detection of heart disease. Particularly, comparison of six well known classifiers for the well used Cleveland data is performed. Further, this paper highlights the potential of an expert judgment based (i.e., medical knowledge driven) feature selection process (termed as MFS), and compare against the generally employed computational intelligence based feature selection mechanism. 
Also, this article recognizes that the publicly available Cleveland data becomes imbalanced when considering binary classification. Performance of classifiers, and also the potential of MFS are investigated considering this imbalanced data issue. The experimental results demonstrate that the use of MFS noticeably improved the performance, especially in terms of accuracy, for most of the classifiers considered and for majority of the datasets (generated by converting the Cleveland dataset for binary classification). MFS combined with the computerized feature selection process (CFS) has also been investigated and showed encouraging results particularly for NaiveBayes, IBK and SMO. In summary, the medical knowledge based feature selection method has shown promise for use in heart disease diagnostics. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "615a24719fe4300ea8971e86014ed8fe", "text": "This paper presents a new code for the analysis of gamma spectra generated by an equipment for continuous measurement of gamma radioactivity in aerosols with paper filter. It is called pGamma and has been developed by the Nuclear Engineering Research Group at the Technical University of Catalonia - Barcelona Tech and by Raditel Serveis i Subministraments Tecnològics, Ltd. The code has been developed to identify the gamma emitters and to determine their activity concentration. It generates alarms depending on the activity of the emitters and elaborates reports. Therefore it includes a library with NORM and artificial emitters of interest. The code is being adapted to the monitors of the Environmental Radiological Surveillance Network of the local Catalan Government in Spain (Generalitat de Catalunya) and is used at three stations of the Network.", "title": "" }, { "docid": "f391c56dd581d965548062944200e95f", "text": "We present a traceability recovery method and tool based on latent semantic indexing (LSI) in the context of an artefact management system. The tool highlights the candidate links not identified yet by the software engineer and the links identified but missed by the tool, probably due to inconsistencies in the usage of domain terms in the traced software artefacts. We also present a case study of using the traceability recovery tool on software artefacts belonging to different categories of documents, including requirement, design, and testing documents, as well as code components.", "title": "" }, { "docid": "3cab403ffab3e44252174ab5d7d985f8", "text": "A prominent parallel data processing tool MapReduce is gaining significant momentum from both industry and academia as the volume of data to analyze grows rapidly. While MapReduce is used in many areas where massive data analysis is required, there are still debates on its performance, efficiency per node, and simple abstraction. This survey intends to assist the database and open source communities in understanding various technical aspects of the MapReduce framework. In this survey, we characterize the MapReduce framework and discuss its inherent pros and cons. We then introduce its optimization strategies reported in the recent literature. We also discuss the open issues and challenges raised on parallel data analysis with MapReduce.", "title": "" } ]
scidocsrr
6328f332b11863c1a18b27f9b2021915
The BSD Packet Filter: A New Architecture for User-level Packet Capture
[ { "docid": "ea33b26333eaa1d92f3c42688eb8aba5", "text": "Code to implement network protocols can be either inside the kernel of an operating system or in user-level processes. Kernel-resident code is hard to develop, debug, and maintain, but user-level implementations typically incur significant overhead and perform poorly.\nThe performance of user-level network code depends on the mechanism used to demultiplex received packets. Demultiplexing in a user-level process increases the rate of context switches and system calls, resulting in poor performance. Demultiplexing in the kernel eliminates unnecessary overhead.\nThis paper describes the packet filter, a kernel-resident, protocol-independent packet demultiplexer. Individual user processes have great flexibility in selecting which packets they will receive. Protocol implementations using the packet filter perform quite well, and have been in production use for several years.", "title": "" } ]
[ { "docid": "a7db9f3f1bb5883f6a5a873dd661867b", "text": "Psychologists and sociologists usually interpret happiness scores as cardinal and comparable across respondents, and thus run OLS regressions on happiness and changes in happiness. Economists usually assume only ordinality and have mainly used ordered latent response models, thereby not taking satisfactory account of fixed individual traits. We address this problem by developing a conditional estimator for the fixed-effect ordered logit model. We find that assuming ordinality or cardinality of happiness scores makes little difference, whilst allowing for fixed-effects does change results substantially. We call for more research into the determinants of the personality traits making up these fixed-effects.", "title": "" }, { "docid": "083d5b88cc1bf5490a0783a4a94e9fb2", "text": "Taking care and maintenance of a healthy population is the Strategy of each country. Information and communication technologies in the health care system have led to many changes in order to improve the quality of health care services to patients, rational spending time and reduce costs. In the booming field of IT research, the reach of drug delivery, information on grouping of similar drugs has been lacking. The wealth distribution and drug affordability at a certain demographic has been interlinked and proposed in this paper. Looking at the demographic we analyze and group the drugs based on target action and link this to the wealth and the people to medicine ratio, which can be accomplished via data mining and web mining. The data thus mined will be analysed and made available to public and commercial purpose for their further knowledge and benefit.", "title": "" }, { "docid": "df62526aa79eb750790bd48254171faf", "text": "SUMMARY Non-safety critical software developers have been reaping the benefits of adopting agile practices for a number of years. However, developers of safety critical software often have concerns about adopting agile practices. Through performing a literature review, this research has identified the perceived barriers to following agile practices when developing medical device software. A questionnaire based survey was also conducted with medical device software developers in Ireland to determine the barriers to adopting agile practices. The survey revealed that half of the respondents develop software in accordance with a plan driven software development lifecycle and that they believe that there are a number of perceived barriers to adopting agile practices when developing regulatory compliant software such as: being contradictory to regulatory requirements; insufficient coverage of risk management activities and the lack of up-front planning. In addition, a comparison is performed between the perceived and actual barriers. Based upon the findings of the literature review and survey, it emerged that no external barriers exist to adopting agile practices when developing medical device software and the barriers that do exists are internal barriers such as getting stakeholder buy in.", "title": "" }, { "docid": "0c8517bab8a8fa34f25a72cf6c971b25", "text": "Automotive radar sensors are key components for driver assistant systems. In order to handle complex traffic scenarios an advanced separability is required with respect to object angle, distance and velocity. In this contribution a highly integrated automotive radar sensor enabling chirp sequence modulation will be presented and discussed. 
Furthermore, the development of a target simulator which is essential for the characterization of such radar sensors will be introduced including measurements demonstrating the performance of our system.", "title": "" }, { "docid": "70509b891a45c8cdd0f2ed02207af06f", "text": "This paper presents an algorithm for drawing a sequence of graphs online. The algorithm strives to maintain the global structure of the graph and, thus, the user's mental map while allowing arbitrary modifications between consecutive layouts. The algorithm works online and uses various execution culling methods in order to reduce the layout time and handle large dynamic graphs. Techniques for representing graphs on the GPU allow a speedup by a factor of up to 17 compared to the CPU implementation. The scalability of the algorithm across GPU generations is demonstrated. Applications of the algorithm to the visualization of discussion threads in Internet sites and to the visualization of social networks are provided.", "title": "" }, { "docid": "a94ad02ca81d7c4a25eaf9d37c8c3ef0", "text": "The use of mobile technologies has recently received great attention in language learning. Most research evaluates the effects of employing mobile devices in language learning and explores the design of mobile-learning interventions that can maximize the benefits of new technologies. However, it is still unclear whether the use of mobile devices in language learning is more effective than other instructional approaches. It is also not clear whether the effects of mobile-device use vary in different settings. Our meta-analysis will explore these questions about mobile technology use in language learning. Based on the specific inclusion and exclusion criteria, 22 d-type effect sizes from 20 studies were calculated for the meta-analysis. We adopted the random-effects model, and the estimated average effect was 0.51 (se = 0.10). This is a moderate positive overall effect of using mobile devices on language acquisition and language-learning achievement. Moderator analyses under the mixed-effects model examined six features; effects varied significantly only by test type and source of the study. The overall effect and the effects of these moderators of mobile-device use on achievement in language learning are discussed.", "title": "" }, { "docid": "ca41837dd01a66259854c03b820a46ff", "text": "We present a supervised sequence to sequence transduction model with a hard attention mechanism which combines the more traditional statistical alignment methods with the power of recurrent neural networks. We evaluate the model on the task of morphological inflection generation and show that it provides state of the art results in various setups compared to the previous neural and non-neural approaches. Eventually we present an analysis of the learned representations for both hard and soft attention models, shedding light on the features such models extract in order to solve the task.", "title": "" }, { "docid": "333bd26d16544377536a6c96168439b7", "text": "Mate retention is an important problem in romantic relationships because of mate poachers, infidelity, and the risk of outright defection. The current study (N=892) represents the first study of mate retention tactics conducted in Spain. We tested hypotheses about the effects of gender, relationship commitment status, and personality on mate retention tactics. 
Women and men differed in the use of resource display, appearance enhancement, intrasexual violence, and submission/self-abasement as mate retention tactics. Those in more committed relationships reported higher levels of resource display, appearance enhancement, love, and verbal signals of possession. Those in less committed relationships more often reported intentionally evoking jealousy in their partner as a mate retention tactic. Personality characteristics, particularly Neuroticism and Agreeableness, correlated in coherent ways with mate retention tactics, supporting two evolution-based hypotheses. Discussion focuses on the implications, future research directions, and interdisciplinary syntheses emerging between personality and social psychology and evolutionary psychology.", "title": "" }, { "docid": "59d194764511b1ad2ce0ca5d858fab21", "text": "Humanoid robot path finding is one of the core-technologies in robot research domain. This paper presents an approach to finding a path for robot motion by fusing images taken by the NAO's camera and proximity information delivered by sonar sensors. The NAO robot takes an image around its surroundings, uses the fuzzy color extractor to segment its potential path colors, and selects a fitting line as path by the least squares method. Therefore, the NAO robot is able to perform the automatic navigation according to the selected path. As a result, the experiments are conducted to navigate the NAO robot to walk to a given destination and to grasp a box. In addition, the NAO robot uses its sonar sensors to detect a barrier and helps pick up the box with its hands.", "title": "" }, { "docid": "e5c6debcbbb979a18ca13f7739043174", "text": "Recurrent neural networks and sequence to sequence models require a predetermined length for prediction output length. Our model addresses this by allowing the network to predict a variable length output in inference. A new loss function with a tailored gradient computation is developed that trades off prediction accuracy and output length. The model utilizes a function to determine whether a particular output at a time should be evaluated or not given a predetermined threshold. We evaluate the model on the problem of predicting the prices of securities. We find that the model makes longer predictions for more stable securities and it naturally balances prediction accuracy and length.", "title": "" }, { "docid": "ba35d998ee00110e8d571730811972f9", "text": "Argument mining of online interactions is in its infancy. One reason is the lack of annotated corpora in this genre. To make progress, we need to develop a principled and scalable way of determining which portions of texts are argumentative and what is the nature of argumentation. We propose a two-tiered approach to achieve this goal and report on several initial studies to assess its potential.", "title": "" }, { "docid": "64fd862582693e030c88418a1dcf4c54", "text": "Anthropomorphic persuasive appeals are prevalent. However, their effectiveness has not been well studied. The present research addresses this issue with two experiments in the context of environmental persuasion. It shows that anthropomorphic messages, relative to non-anthropomorphic ones, appear to motivate more conservation behaviour and elicit more favourable message responses only among recipients who have a strong need for effectance or social connection. Among recipients whose such need is weak, anthropomorphic appeals seem to backfire. 
These findings extend the research on motivation and persuasion and add evidence to the motivational bases of anthropomorphism. In addition, joining some recent studies, the present research highlights the implications of anthropomorphism of nature for environmental conservation efforts, and offers some practical suggestions for environmental persuasion.", "title": "" }, { "docid": "55dc046b0052658521d627f29bcd7870", "text": "The proliferation of IT and its consequent dispersion is an enterprise reality, however, most organizations do not have adequate tools and/or methodologies that enable the management and coordination of their Information Systems. The Zachman Framework provides a structured way for any organization to acquire the necessary knowledge about itself with respect to the Enterprise Architecture. Zachman proposes a logical structure for classifying and organizing the descriptive representations of an enterprise, in different dimensions, and each dimension can be perceived in different perspectives.In this paper, we propose a method for achieving an Enterprise Architecture Framework, based on the Zachman Framework Business and IS perspectives, that defines the several artifacts for each cell, and a method which defines the sequence of filling up each cell in a top-down and incremental approach. We also present a tool developed for the purpose of supporting the Zachman Framework concepts. The tool: (i) behaves as an information repository for the framework's concepts; (ii) produces the proposed artifacts that represent each cell contents, (iii) allows multi-dimensional analysis among cell's elements, which is concerned with perspectives (rows) and/or dimensions (columns) dependency; and (iv) finally, evaluate the integrity, dependency and, business and information systems alignment level, through the answers defined for each framework dimension.", "title": "" }, { "docid": "e813eadbd5c8942f5ab01fdeda85c023", "text": "Imagination is considered an important component of the creative process, and many psychologists agree that imagination is based on our perceptions, experiences, and conceptual knowledge, recombining them into novel ideas and impressions never before experienced. As an attempt to model this account of imagination, we introduce the Associative Conceptual Imagination (ACI) framework that uses associative memory models in conjunction with vector space models. ACI is a framework for learning conceptual knowledge and then learning associations between those concepts and artifacts, which facilitates imagining and then creating new and interesting artifacts. We discuss the implications of this framework, its creative potential, and possible ways to implement it in practice. We then demonstrate an initial prototype that can imagine and then generate simple images.", "title": "" }, { "docid": "8858053a805375aba9d8e71acfd7b826", "text": "With the accelerating rate of globalization, business exchanges are carried out cross the border, as a result there is a growing demand for talents professional both in English and Business. We can see that at present Business English courses are offered by many language schools in the aim of meeting the need for Business English talent. Many researchers argue that no differences can be defined between Business English teaching and General English teaching. 
However, this paper concludes that Business English is different from General English at least in such aspects as in the role of teacher, in course design, in teaching models, etc., thus different teaching methods should be applied in order to realize expected teaching goals.", "title": "" }, { "docid": "40dc2dc28dca47137b973757cdf3bf34", "text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.", "title": "" }, { "docid": "1e0eade3cc92eb79160aeac35a3a26d1", "text": "Global environmental concerns and the escalating demand for energy, coupled with steady progress in renewable energy technologies, are opening up new opportunities for utilization of renewable energy vailable online 12 January 2011", "title": "" }, { "docid": "6990c4f7bde94cb0e14245872e670f91", "text": "The UK's recent move to polymer banknotes has seen some of the currently used fingermark enhancement techniques for currency potentially become redundant, due to the surface characteristics of the polymer substrates. Possessing a non-porous surface with some semi-porous properties, alternate processes are required for polymer banknotes. This preliminary investigation explored the recovery of fingermarks from polymer notes via vacuum metal deposition using elemental copper. The study successfully demonstrated that fresh latent fingermarks, from an individual donor, could be clearly developed and imaged in the near infrared. By varying the deposition thickness of the copper, the contrast between the fingermark minutiae and the substrate could be readily optimised. Where the deposition thickness was thin enough to be visually indistinguishable, forensic gelatin lifters could be used to lift the fingermarks. These lifts could then be treated with rubeanic acid to produce a visually distinguishable mark. The technique has shown enough promise that it could be effectively utilised on other semi- and non-porous substrates.", "title": "" }, { "docid": "6018c72660f9fd8f3d073febb4b54043", "text": "Watershed Transformation in mathematical morphology is a powerful tool for image segmentation. Watershed transformation based segmentation is generally marker controlled segmentation. This paper purposes a novel method of image segmentation that includes image enhancement and noise removal techniques with the Prewitt’s edge detection operator. The proposed method is evaluated and compared to existing method. The results show that the proposed method could effectively reduce the over segmentation effect and achieve more accurate segmentation results than the existing method.", "title": "" }, { "docid": "b3abdcc994bdccde066f35dc863dc542", "text": "This paper outlines the development of a wearable game controller incorporating vibrotacticle haptic feedback that provides a low cost, versatile and intuitive interface for controlling digital games. 
The device differs from many traditional haptic feedback implementation in that it combines vibrotactile based haptic feedback with gesture based input, thus becoming a two way conduit between the user and the virtual environment. The device is intended to challenge what is considered an “interface” and draws on work in the area of Actor-Network theory to purposefully blur the boundary between man and machine. This allows for a more immersive experience, so rather than making the user feel like they are controlling an aircraft the intuitive interface allows the user to become the aircraft that is controlled by the movements of the user's hand. This device invites playful action and thrill. It bridges new territory on portable and low cost solutions for haptic controllers in a gaming context.", "title": "" } ]
scidocsrr
c3c305f1b0114c46ec4ca620701ce52b
Organizational change and development.
[ { "docid": "4a536c1186a1d1d1717ec1e0186b262c", "text": "In this paper, I outline a perspective on organizational transformation which proposes change as endemic to the practice of organizing and hence as enacted through the situated practices of organizational actors as they improvise, innovate, and adjust their work routines over time. I ground this perspective in an empirical study which examined the use of a new information technology within one organization over a two year period. In this organization, a series of subtle but nonetheless significant changes were enacted over time as organizational actors appropriated the new technology into their work practices, and then experimented with local innovations, responded to unanticipated breakdowns and contingencies, initiated opportunistic shifts in structure and coordination mechanisms, and improvised various procedural, cognitive, and normative variations to accommodate their evolving use of the technology. These findings provide the empirical basis for a practice-based perspective on organizational transformation. Because it is grounded in the micro-level changes that actors enact over time as they make sense of and act in the world, a practice lens can avoid the strong assumptions of rationality, determinism, or discontinuity characterizing existing change perspectives. A situated change perspective may offer a particularly useful strategy for analyzing change in organizations turning increasingly away from patterns of stability, bureaucracy, and control to those of flexibility, selforganizing, and learning.", "title": "" } ]
[ { "docid": "51c82ab631167a61e553e1ab8e34a385", "text": "The social and political context of sexual identity development in the United States has changed dramatically since the mid twentieth century. Same-sex attracted individuals have long needed to reconcile their desire with policies of exclusion, ranging from explicit outlaws on same-sex activity to exclusion from major social institutions such as marriage. This paper focuses on the implications of political exclusion for the life course of individuals with same-sex desire through the analytic lens of narrative. Using illustrative evidence from a study of autobiographies of gay men spanning a 60-year period and a study of the life stories of contemporary same-sex attracted youth, we detail the implications of historic silence, exclusion, and subordination for the life course.", "title": "" }, { "docid": "a5bd062a1ed914fb2effc924e41a4f73", "text": "With the developments and applications of the new information technologies, such as cloud computing, Internet of Things, big data, and artificial intelligence, a smart manufacturing era is coming. At the same time, various national manufacturing development strategies have been put forward, such as Industry 4.0, Industrial Internet, manufacturing based on Cyber-Physical System, and Made in China 2025. However, one of specific challenges to achieve smart manufacturing with these strategies is how to converge the manufacturing physical world and the virtual world, so as to realize a series of smart operations in the manufacturing process, including smart interconnection, smart interaction, smart control and management, etc. In this context, as a basic unit of manufacturing, shop-floor is required to reach the interaction and convergence between physical and virtual spaces, which is not only the imperative demand of smart manufacturing, but also the evolving trend of itself. Accordingly, a novel concept of digital twin shop-floor (DTS) based on digital twin is explored and its four key components are discussed, including physical shop-floor, virtual shop-floor, shop-floor service system, and shop-floor digital twin data. What is more, the operation mechanisms and implementing methods for DTS are studied and key technologies as well as challenges ahead are investigated, respectively.", "title": "" }, { "docid": "cc6161fd350ac32537dc704cbfef2155", "text": "The contribution of cloud computing and mobile computing technologies lead to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. 
Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real world experiment have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.", "title": "" }, { "docid": "c0559cebfad123a67777868990d40c7e", "text": "One of the attractive methods for providing natural human-computer interaction is the use of the hand as an input device rather than the cumbersome devices such as keyboards and mice, which need the user to be located in a specific location to use these devices. Since human hand is an articulated object, it is an open issue to discuss. The most important thing in hand gesture recognition system is the input features, and the selection of good features representation. This paper presents a review study on the hand postures and gesture recognition methods, which is considered to be a challenging problem in the human-computer interaction context and promising as well. Many applications and techniques were discussed here with the explanation of system recognition framework and its main phases.", "title": "" }, { "docid": "e3db1429e8821649f35270609459cb0d", "text": "Novelty detection is the task of recognising events the differ from a model of normality. This paper proposes an acoustic novelty detector based on neural networks trained with an adversarial training strategy. The proposed approach is composed of a feature extraction stage that calculates Log-Mel spectral features from the input signal. Then, an autoencoder network, trained on a corpus of “normal” acoustic signals, is employed to detect whether a segment contains an abnormal event or not. A novelty is detected if the Euclidean distance between the input and the output of the autoencoder exceeds a certain threshold. The innovative contribution of the proposed approach resides in the training procedure of the autoencoder network: instead of using the conventional training procedure that minimises only the Minimum Mean Squared Error loss function, here we adopt an adversarial strategy, where a discriminator network is trained to distinguish between the output of the autoencoder and data sampled from the training corpus. The autoencoder, then, is trained also by using the binary cross-entropy loss calculated at the output of the discriminator network. The performance of the algorithm has been assessed on a corpus derived from the PASCAL CHiME dataset. The results showed that the proposed approach provides a relative performance improvement equal to 0.26% compared to the standard autoencoder. The significance of the improvement has been evaluated with a one-tailed z-test and resulted significant with p < 0.001. 
The presented approach thus showed promising results on this task and it could be extended as a general training strategy for autoencoders if confirmed by additional experiments.", "title": "" }, { "docid": "6b0e2a151fd9aa53a97884d3f6b34c33", "text": "Building systems that possess the sensitivity and intelligence to identify and describe high-level attributes in music audio signals continues to be an elusive goal but one that surely has broad and deep implications for a wide variety of applications. Hundreds of articles have so far been published toward this goal, and great progress appears to have been made. Some systems produce remarkable accuracies at recognizing high-level semantic concepts, such as music style, genre, and mood. However, it might be that these numbers do not mean what they seem. In this article, we take a state-of-the-art music content analysis system and investigate what causes it to achieve exceptionally high performance in a benchmark music audio dataset. We dissect the system to understand its operation, determine its sensitivities and limitations, and predict the kinds of knowledge it could and could not possess about music. We perform a series of experiments to illuminate what the system has actually learned to do and to what extent it is performing the intended music listening task. Our results demonstrate how the initial manifestation of music intelligence in this state of the art can be deceptive. Our work provides constructive directions toward developing music content analysis systems that can address the music information and creation needs of real-world users.", "title": "" }, { "docid": "69049d1f5a3b14bb00d57d16a93ec47f", "text": "The porphyrias are disorders of haem biosynthesis which present with acute neurovisceral attacks or disorders of sun-exposed skin. Acute attacks occur mainly in adults and comprise severe abdominal pain, nausea, vomiting, autonomic disturbance, central nervous system involvement and peripheral motor neuropathy. Cutaneous porphyrias can be acute or chronic presenting at various ages. Timely diagnosis depends on clinical suspicion leading to referral of appropriate samples for screening by reliable biochemical methods. All samples should be protected from light. Investigation for an acute attack: • Porphobilinogen (PBG) quantitation in a random urine sample collected during symptoms. Urine concentration must be assessed by measuring creatinine, and a repeat requested if urine creatinine <2 mmol/L. • Urgent porphobilinogen testing should be available within 24 h of sample receipt at the local laboratory. Urine porphyrin excretion (TUP) should subsequently be measured on this urine. • Urine porphobilinogen should be measured using a validated quantitative ion-exchange resin-based method or LC-MS. • Increased urine porphobilinogen excretion requires confirmatory testing and clinical advice from the National Acute Porphyria Service. • Identification of individual acute porphyrias requires analysis of urine, plasma and faecal porphyrins. Investigation for cutaneous porphyria: • An EDTA blood sample for plasma porphyrin fluorescence emission spectroscopy and random urine sample for TUP. • Whole blood for porphyrin analysis is essential to identify protoporphyria. • Faeces need only be collected, if first-line tests are positive or if clinical symptoms persist. Investigation for latent porphyria or family history: • Contact a specialist porphyria laboratory for advice. 
Clinical, family details are usually required.", "title": "" }, { "docid": "296ce1f0dd7bf02c8236fa858bb1957c", "text": "As many as one in 20 people in Europe and North America have some form of autoimmune disease. These diseases arise in genetically predisposed individuals but require an environmental trigger. Of the many potential environmental factors, infections are the most likely cause. Microbial antigens can induce cross-reactive immune responses against self-antigens, whereas infections can non-specifically enhance their presentation to the immune system. The immune system uses fail-safe mechanisms to suppress infection-associated tissue damage and thus limits autoimmune responses. The association between infection and autoimmune disease has, however, stimulated a debate as to whether such diseases might also be triggered by vaccines. Indeed there are numerous claims and counter claims relating to such a risk. Here we review the mechanisms involved in the induction of autoimmunity and assess the implications for vaccination in human beings.", "title": "" }, { "docid": "617d1d0900ddebb431ae8fe37ad2e23b", "text": "We used cDNA microarrays to assess gene expression profiles in 60 human cancer cell lines used in a drug discovery screen by the National Cancer Institute. Using these data, we linked bioinformatics and chemoinformatics by correlating gene expression and drug activity patterns in the NCI60 lines. Clustering the cell lines on the basis of gene expression yielded relationships very different from those obtained by clustering the cell lines on the basis of their response to drugs. Gene-drug relationships for the clinical agents 5-fluorouracil and L-asparaginase exemplify how variations in the transcript levels of particular genes relate to mechanisms of drug sensitivity and resistance. This is the first study to integrate large databases on gene expression and molecular pharmacology.", "title": "" }, { "docid": "40c4175be1573d9542f6f9f859fafb01", "text": "BACKGROUND\nFalls are a major threat to the health and independence of seniors. Regular physical activity (PA) can prevent 40% of all fall injuries. The challenge is to motivate and support seniors to be physically active. Persuasive systems can constitute valuable support for persons aiming at establishing and maintaining healthy habits. However, these systems need to support effective behavior change techniques (BCTs) for increasing older adults' PA and meet the senior users' requirements and preferences. Therefore, involving users as codesigners of new systems can be fruitful. Prestudies of the user's experience with similar solutions can facilitate future user-centered design of novel persuasive systems.\n\n\nOBJECTIVE\nThe aim of this study was to investigate how seniors experience using activity monitors (AMs) as support for PA in daily life. The addressed research questions are as follows: (1) What are the overall experiences of senior persons, of different age and balance function, in using wearable AMs in daily life?; (2) Which aspects did the users perceive relevant to make the measurements as meaningful and useful in the long-term perspective?; and (3) What needs and requirements did the users perceive as more relevant for the activity monitors to be useful in a long-term perspective?\n\n\nMETHODS\nThis qualitative interview study included 8 community-dwelling older adults (median age: 83 years). The participants' experiences in using two commercial AMs together with tablet-based apps for 9 days were investigated. 
Activity diaries during the usage and interviews after the usage were exploited to gather user experience. Comments in diaries were summarized, and interviews were analyzed by inductive content analysis.\n\n\nRESULTS\nThe users (n=8) perceived that, by using the AMs, their awareness of own PA had increased. However, the AMs' impact on the users' motivation for PA and activity behavior varied between participants. The diaries showed that self-estimated physical effort varied between participants and varied for each individual over time. Additionally, participants reported different types of accomplished activities; talking walks was most frequently reported. To be meaningful, measurements need to provide the user with a reliable receipt of whether his or her current activity behavior is sufficient for reaching an activity goal. Moreover, praise when reaching a goal was described as motivating feedback. To be useful, the devices must be easy to handle. In this study, the users perceived wearables as easy to handle, whereas tablets were perceived difficult to maneuver. Users reported in the diaries that the devices had been functional 78% (58/74) of the total test days.\n\n\nCONCLUSIONS\nActivity monitors can be valuable for supporting seniors' PA. However, the potential of the solutions for a broader group of seniors can significantly be increased. Areas of improvement include reliability, usability, and content supporting effective BCTs with respect to increasing older adults' PA.", "title": "" }, { "docid": "8d197bf27af825b9972a490d3cc9934c", "text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. 
We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.", "title": "" }, { "docid": "b1ba519ffe5321d9ab92ebed8d9264bb", "text": "OBJECTIVES\nThe purpose of this study was to establish reference charts of fetal biometric parameters measured by 2-dimensional sonography in a large Brazilian population.\n\n\nMETHODS\nA cross-sectional retrospective study was conducted including 31,476 low-risk singleton pregnancies between 18 and 38 weeks' gestation. The following fetal parameters were measured: biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight. To assess the correlation between the fetal biometric parameters and gestational age, polynomial regression models were created, with adjustments made by the determination coefficient (R(2)).\n\n\nRESULTS\nThe means ± SDs of the biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight measurements at 18 and 38 weeks were 4.2 ± 2.34 and 9.1 ± 4.0 cm, 15.3 ± 7.56 and 32.3 ± 11.75 cm, 13.3 ± 10.42 and 33.4 ± 20.06 cm, 2.8 ± 2.17 and 7.2 ± 3.58 cm, and 256.34 ± 34.03 and 3169.55 ± 416.93 g, respectively. Strong correlations were observed between all fetal biometric parameters and gestational age, best represented by second-degree equations, with R(2) values of 0.95, 0.96, 0.95, 0.95, and 0.95 for biparietal diameter, head circumference, abdominal circumference, femur length, and estimated fetal weight.\n\n\nCONCLUSIONS\nFetal biometric parameters were determined for a large Brazilian population, and they may serve as reference values in cases with a high risk of intrauterine growth disorders.", "title": "" }, { "docid": "b1eff907bd8b227275f094d57b627ac8", "text": "BACKGROUND\nPilonidal sinus is a chronic inflammatory disorder of the intergluteal sulcus. The disorder often negatively affects patients' quality of life, and there are numerous possible methods of operative treatment for pilonidal sinus. The aim of our study was to compare the results of 3 different operative procedures (tension-free primary closure, Limberg flap, and Karydakis technique) used in the treatment of pilonidal disease.\n\n\nMETHODS\nThe study was conducted via a prospective randomized design. The patients were randomized into 3 groups via a closed envelope method. Patients were included in the study after admission to our clinic with pilonidal sinus disease and operative treatment already were planned. The 2 main outcomes of the study were early complications from the methods used and later recurrences of the disease.\n\n\nRESULTS\nA total of 150 patients were included in the study, and the groups were similar in terms of age, sex, and American Society of Anesthesiologists scores. The median follow-up time of the study was 24.2 months (range, 18.5-34.27) postsurgery. The recurrence rates were 6% for both the Limberg and Karydakis groups and 4% for the tension-free primary closure group. Therefore, there was no substantial difference in the recurrence rates.\n\n\nCONCLUSION\nThe search for an ideal treatment modality for pilonidal sinus disease is still ongoing. The main conclusion of our study is that a tension-free healing side is much more important than a midline suture line. 
Also, tension-free primary closure is as effective as a flap procedure, and it is also easier to perform.", "title": "" }, { "docid": "d79a1a6398e98855ddd1181c141d7b00", "text": "In this paper we describe a new binarisation method designed specifically for OCR of low quality camera images: Background Surface Thresholding or BST. This method is robust to lighting variations and produces images with very little noise and consistent stroke width. BST computes a ”surface” of background intensities at every point in the image and performs adaptive thresholding based on this result. The surface is estimated by identifying regions of lowresolution text and interpolating neighbouring background intensities into these regions. The final threshold is a combination of this surface and a global offset. According to our evaluation BST produces considerably fewer OCR errors than Niblack’s local average method while also being more runtime efficient.", "title": "" }, { "docid": "3e0d88a135e7d7daff538eea1a6f2c9d", "text": "The first step in an image retrieval pipeline consists of comparing global descriptors from a large database to find a short list of candidate matching images. The more compact the global descriptor, the faster the descriptors can be compared for matching. State-of-the-art global descriptors based on Fisher Vectors are represented with tens of thousands of floating point numbers. While there is significant work on compression of local descriptors, there is relatively little work on compression of high dimensional Fisher Vectors. We study the problem of global descriptor compression in the context of image retrieval, focusing on extremely compact binary representations: 64-1024 bits. Motivated by the remarkable success of deep neural networks in recent literature, we propose a compression scheme based on deeply stacked Restricted Boltzmann Machines (SRBM), which learn lower dimensional non-linear subspaces on which the data lie. We provide a thorough evaluation of several state-of-the-art compression schemes based on PCA, Locality Sensitive Hashing, Product Quantization and greedy bit selection, and show that the proposed compression scheme outperforms all existing schemes.", "title": "" }, { "docid": "7e26a6ccd587ae420b9d2b83f6b54350", "text": "Because of the SARS epidemic in Asia, people chose to the Internet shopping instead of going shopping on streets. In other words, SARS actually gave the Internet an opportunity to revive from its earlier bubbles. The purpose of this research is to provide managers of shopping Websites regarding consumer purchasing decisions based on the CSI (Consumer Styles Inventory) which was proposed by Sproles (1985) and Sproles & Kendall (1986). According to the CSI, one can capture the decision-making styles of online shoppers. Furthermore, this research also discusses the gender differences among online shoppers. Exploratory factor analysis (EFA) was used to understand the decision-making styles and discriminant analysis was used to distinguish the differences between female and male shoppers. Managers of Internet shopping Websites can design a proper marketing mix with the findings that there are differences in purchasing decisions between genders.", "title": "" }, { "docid": "7f49cb5934130fb04c02db03bd40e83d", "text": "BACKGROUND\nResearch literature on problematic smartphone use, or smartphone addiction, has proliferated. However, relationships with existing categories of psychopathology are not well defined. 
We discuss the concept of problematic smartphone use, including possible causal pathways to such use.\n\n\nMETHOD\nWe conducted a systematic review of the relationship between problematic use with psychopathology. Using scholarly bibliographic databases, we screened 117 total citations, resulting in 23 peer-reviewer papers examining statistical relations between standardized measures of problematic smartphone use/use severity and the severity of psychopathology.\n\n\nRESULTS\nMost papers examined problematic use in relation to depression, anxiety, chronic stress and/or low self-esteem. Across this literature, without statistically adjusting for other relevant variables, depression severity was consistently related to problematic smartphone use, demonstrating at least medium effect sizes. Anxiety was also consistently related to problem use, but with small effect sizes. Stress was somewhat consistently related, with small to medium effects. Self-esteem was inconsistently related, with small to medium effects when found. Statistically adjusting for other relevant variables yielded similar but somewhat smaller effects.\n\n\nLIMITATIONS\nWe only included correlational studies in our systematic review, but address the few relevant experimental studies also.\n\n\nCONCLUSIONS\nWe discuss causal explanations for relationships between problem smartphone use and psychopathology.", "title": "" }, { "docid": "a48278ee8a21a33ff87b66248c6b0b8a", "text": "We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components. The temporal dependencies within and across different tasks are encoded succinctly as recurrent connections. The dialog system responses beyond SLU component are also exploited as effective external features. We show with extensive experiments on a number of datasets that the proposed joint learning framework generates state-of-the-art results for both classification and tagging, and the contextual modeling based on recurrent and external features significantly improves the context sensitivity of SLU models.", "title": "" }, { "docid": "0f8bf207201692ad4905e28a2993ef29", "text": "Bluespec System Verilog is an EDL toolset for ASIC and FPGA design offering significantly higher productivity via a radically different approach to high-level synthesis. Many other attempts at high-level synthesis have tried to move the design language towards a more software-like specification of the behavior of the intended hardware. By means of code samples, demonstrations and measured results, we illustrate how Bluespec System Verilog, in an environment familiar to hardware designers, can significantly improve productivity without compromising generated hardware quality.", "title": "" } ]
scidocsrr
d20ee1b0987b213978540bd652324184
A Distributed Anomaly Detection System for In-Vehicle Network Using HTM
[ { "docid": "c158e9421ec0d1265bd625b629e64dc5", "text": "This paper proposes a gateway framework for in-vehicle networks (IVNs) based on the controller area network (CAN), FlexRay, and Ethernet. The proposed gateway framework is designed to be easy to reuse and verify to reduce development costs and time. The gateway framework can be configured, and its verification environment is automatically generated by a program with a dedicated graphical user interface (GUI). The gateway framework provides state-of-the-art functionalities that include parallel reprogramming, diagnostic routing, network management (NM), dynamic routing update, multiple routing configuration, and security. The proposed gateway framework was developed, and its performance was analyzed and evaluated.", "title": "" }, { "docid": "0f7f8557ffa238a529f28f9474559cc4", "text": "Fast incipient machine fault diagnosis is becoming one of the key requirements for economical and optimal process operation management. Artificial neural networks have been used to detect machine faults for a number of years and shown to be highly successful in this application area. This paper presents a novel test technique for machine fault detection and classification in electro-mechanical machinery from vibration measurements using one-class support vector machines (SVMs). In order to evaluate one-class SVMs, this paper examines the performance of the proposed method by comparing it with that of multilayer perception, one of the artificial neural network techniques, based on real benchmarking data. q 2005 Published by Elsevier Ltd.", "title": "" }, { "docid": "400dce50037a38d19a3057382d9246b5", "text": "A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.", "title": "" }, { "docid": "c3c0e14aa82b438ceb92a84bcdbed184", "text": "Advances in technology for miniature electronic military equipment and systems have led to the emergence of unmanned aerial vehicles (UAVs) as the new weapons of war and tools used in various other areas. UAVs can easily be controlled from a remote location. They are being used for critical operations, including offensive, reconnaissance, surveillance and other civilian missions. The need to secure these channels in a UAV system is one of the most important aspects of the security of this system because all information critical to the mission is sent through wireless communication channels. It is well understood that loss of control over these systems to adversaries due to lack of security is a potential threat to national security. 
In this paper various security threats to a UAV system is analyzed and a cyber-security threat model showing possible attack paths has been proposed. This model will help designers and users of the UAV systems to understand the threat profile of the system so as to allow them to address various system vulnerabilities, identify high priority threats, and select mitigation techniques for these threats.", "title": "" }, { "docid": "7f2acf667a66f2812023c26c4ca95cf1", "text": "Vehicle-IT convergence technology is a rapidly rising paradigm of modern vehicles, in which an electronic control unit (ECU) is used to control the vehicle electrical systems, and the controller area network (CAN), an in-vehicle network, is commonly used to construct an efficient network of ECUs. Unfortunately, security issues have not been treated properly in CAN, although CAN control messages could be life-critical. With the appearance of the connected car environment, in-vehicle networks (e.g., CAN) are now connected to external networks (e.g., 3G/4G mobile networks), enabling an adversary to perform a long-range wireless attack using CAN vulnerabilities. In this paper we show that a long-range wireless attack is physically possible using a real vehicle and malicious smartphone application in a connected car environment. We also propose a security protocol for CAN as a countermeasure designed in accordance with current CAN specifications. We evaluate the feasibility of the proposed security protocol using CANoe software and a DSP-F28335 microcontroller. Our results show that the proposed security protocol is more efficient than existing security protocols with respect to authentication delay and communication load.", "title": "" } ]
[ { "docid": "d8ebc5a68f8e3e7db1abc6a0e7b37da2", "text": "Previous research shows that interleaving rather than blocking practice of different skills (e.g. abcbcacab instead of aaabbbccc) usually improves subsequent test performance. Yet interleaving, but not blocking, ensures that practice of any particular skill is distributed, or spaced, because any two opportunities to practice the same task are not consecutive. Hence, because spaced practice typically improves test performance, the previously observed test benefits of interleaving may be due to spacing rather than interleaving per se. In the experiment reported herein, children practiced four kinds of mathematics problems in an order that was interleaved or blocked, and the degree of spacing was fixed. The interleaving of practice impaired practice session performance yet doubled scores on a test given one day later. An analysis of the errors suggested that interleaving boosted test scores by improving participants’ ability to pair each problem with the appropriate procedure. Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "0fba05a38cb601a1b08e6105e6b949c1", "text": "This paper discusses how to implement Paillier homomorphic encryption (HE) scheme in Java as an API. We first analyze existing Pailler HE libraries and discuss their limitations. We then design a comparatively accomplished and efficient Pailler HE Java library. As a proof of concept, we applied our Pailler HE library in an electronic voting system that allows the voting server to sum up the candidates' votes in the encrypted form with voters remain anonymous. Our library records an average of only 2766ms for each vote placement through HTTP POST request.", "title": "" }, { "docid": "019b9076c051d7eb3ad4aae0e018e45c", "text": "This paper investigates the possible application of reinforcement learning to Tetris. The author investigates the background of Tetris, and qualifies it in a mathematical context. The author discusses reinforcement learning, and considers historically successful applications of it. Finally the author discusses considerations surrounding implementation.", "title": "" }, { "docid": "832916685b22b536d1e8e85f0eeb0e14", "text": "People have always sought an attractive smile in harmony with an esthetic appearance. This trend is steadily growing as it influences one’s self esteem and psychological well-being.1,2 Faced with highly esthetic demanding patients, the practitioner should guarantee esthetic outcomes involving conservative procedures. This is undoubtedly challenging and often requiring a perfect multidisciplinary approach.3", "title": "" }, { "docid": "593077b1e73b42abbe35b3c4a49cfd50", "text": "In this paper, we propose a device-to-device (D2D) discovery scheme as a key enabler for a proximity-based service in the Long-Term Evolution Advanced (LTE-A) system. The proximity-based service includes a variety of services exploiting the location information of user equipment (UE), for example, the mobile social network and the mobile marketing. To realize the proximity-based service in the LTE-A system, it is necessary to design a D2D discovery scheme by which UE can discover another UE in its proximity. We design a D2D discovery scheme based on the random access procedure in the LTE-A system. 
The proposed random-access-based D2D discovery scheme is advantageous in that 1) the proposed scheme can be readily applied to the current LTE-A system without significant modification; 2) the proposed scheme discovers pairs of UE in a centralized manner, which enables the access or core network to centrally control the formation of D2D communication networks; and 3) the proposed scheme adaptively allocates resource blocks for the D2D discovery to prevent underutilization of radio resources. We analyze the performance of the proposed D2D discovery scheme. A closed-form formula for the performance is derived by means of the stochastic geometry-based approach. We show that the analysis results accurately match the simulation results.", "title": "" }, { "docid": "f6d08e76bfad9c4988253b643163671a", "text": "This paper proposes a technique for unwanted lane departure detection. Initially, lane boundaries are detected using a combination of the edge distribution function and a modified Hough transform. In the tracking stage, a linear-parabolic lane model is used: in the near vision field, a linear model is used to obtain robust information about lane orientation; in the far field, a quadratic function is used, so that curved parts of the road can be efficiently tracked. For lane departure detection, orientations of both lane boundaries are used to compute a lane departure measure at each frame, and an alarm is triggered when such measure exceeds a threshold. Experimental results indicate that the proposed system can fit lane boundaries in the presence of several image artifacts, such as sparse shadows, lighting changes and bad conditions of road painting, being able to detect in advance involuntary lane crossings. q 2005 Elsevier Ltd All rights reserved.", "title": "" }, { "docid": "751563e10e62d6b8c4a4db9909e92058", "text": "Summarising a high dimensional data set with a low dimension al embedding is a standard approach for exploring its structure. In this paper we provide an over view of some existing techniques for discovering such embeddings. We then introduce a novel prob abilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PC A (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the e mbedded space can easily be nonlinearised through Gaussian processes. We refer to this mod el as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective fu nction, we relate the model to popular spectral techniques such as kernel PCA and multidim ensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrat e the model on a range of real-world and artificially generated data sets.", "title": "" }, { "docid": "8cd970e1c247478f01a9fe2f62530fc4", "text": "In this paper, we propose a method for grasping unknown objects from piles or cluttered scenes, given a point cloud from a single depth camera. We introduce a shape-based method - Symmetry Height Accumulated Features (SHAF) - that reduces the scene description complexity such that the use of machine learning techniques becomes feasible. We describe the basic Height Accumulated Features and the Symmetry Features and investigate their quality using an F-score metric. 
We discuss the gain from Symmetry Features for grasp classification and demonstrate the expressive power of Height Accumulated Features by comparing it to a simple height based learning method. In robotic experiments of grasping single objects, we test 10 novel objects in 150 trials and show significant improvement of 34% over a state-of-the-art method, achieving a success rate of 92%. An improvement of 29% over the competitive method was achieved for a task of clearing a table with 5 to 10 objects and overall 90 trials. Furthermore we show that our approach is easily adaptable for different manipulators by running our experiments on a second platform.", "title": "" }, { "docid": "edc3562602fc9b275e18d44ea3a5d8ac", "text": "The replicase of all cells is thought to utilize two DNA polymerases for coordinated synthesis of leading and lagging strands. The DNA polymerases are held to DNA by circular sliding clamps. We demonstrate here that the E. coli DNA polymerase III holoenzyme assembles into a particle that contains three DNA polymerases. The three polymerases appear capable of simultaneous activity. Furthermore, the trimeric replicase is fully functional at a replication fork with helicase, primase, and sliding clamps; it produces slightly shorter Okazaki fragments than replisomes containing two DNA polymerases. We propose that two polymerases can function on the lagging strand and that the third DNA polymerase can act as a reserve enzyme to overcome certain types of obstacles to the replication fork.", "title": "" }, { "docid": "997adb89f1e02b66f8e3edc6f2b6aed2", "text": "Chimeric antigen receptor (CAR)-engineered T cells (CAR-T cells) have yielded unprecedented efficacy in B cell malignancies, most remarkably in anti-CD19 CAR-T cells for B cell acute lymphoblastic leukemia (B-ALL) with up to a 90% complete remission rate. However, tumor antigen escape has emerged as a main challenge for the long-term disease control of this promising immunotherapy in B cell malignancies. In addition, this success has encountered significant hurdles in translation to solid tumors, and the safety of the on-target/off-tumor recognition of normal tissues is one of the main reasons. In this mini-review, we characterize some of the mechanisms for antigen loss relapse and new strategies to address this issue. In addition, we discuss some novel CAR designs that are being considered to enhance the safety of CAR-T cell therapy in solid tumors.", "title": "" }, { "docid": "0836e5d45582b0a0eec78234776aa419", "text": "‘Description’: ‘Microsoft will accelerate your journey to cloud computing with an! agile and responsive datacenter built from your existing technology investments.’,! ‘DisplayUrl’: ‘www.microsoft.com/en-us/server-cloud/ datacenter/virtualization.aspx’,! ‘ID’: ‘a42b0908-174e-4f25-b59c-70bdf394a9da’,! ‘Title’: ‘Microsoft | Server & Cloud | Datacenter | Virtualization ...’,! ‘Url’: ‘http://www.microsoft.com/en-us/server-cloud/datacenter/ virtualization.aspx’,! ...! Data! #Topics: 228! #Candidate Labels: ~6,000! Domains: BLOGS, BOOKS, NEWS, PUBMED! Candidate labels rated by humans (0-3) ! Published by Lau et al. (2011). 4. Scoring Candidate Labels! Candidate Label: L = {w1, w2, ..., wm}! Scoring Function: Task: The aim of the task is to associate labels with automatically generated topics.", "title": "" }, { "docid": "9beeee852ce0d077720c212cf17be036", "text": "Spoofing speech detection aims to differentiate spoofing speech from natural speech. 
Frame-based features are usually used in most of previous works. Although multiple frames or dynamic features are used to form a super-vector to represent the temporal information, the time span covered by these features are not sufficient. Most of the systems failed to detect the non-vocoder or unit selection based spoofing attacks. In this work, we propose to use a temporal convolutional neural network (CNN) based classifier for spoofing speech detection. The temporal CNN first convolves the feature trajectories with a set of filters, then extract the maximum responses of these filters within a time window using a max-pooling layer. Due to the use of max-pooling, we can extract useful information from a long temporal span without concatenating a large number of neighbouring frames, as in feedforward deep neural network (DNN). Five types of feature are employed to access the performance of proposed classifier. Experimental results on ASVspoof 2015 corpus show that the temporal CNN based classifier is effective for synthetic speech detection. Specifically, the proposed method brings a significant performance boost for the unit selection based spoofing speech detection.", "title": "" }, { "docid": "56ed9f8a4b29653411f6ed55c68adc6f", "text": "The studying of social influence can be used to understand and solve many complicated problems in social network analysis such as predicting influential users. This paper focuses on the problem of predicting influential users on social networks. We introduce a three-level hierarchy that classifies the influence measurements. The hierarchy categorizes the influence measurements by three folds, i.e., models, types and algorithms. Using this hierarchy, we classify the existing influence measurements. We further compare them based on an empirical analysis in terms of performance, accuracy and correlation using datasets from two different social networks to investigate the feasibility of influence measurements. Our results show that predicting influential users does not only depend on the influence measurements but also on the nature of social networks. Our goal is to introduce a standardized baseline for the problem of predicting influential users on social networks.", "title": "" }, { "docid": "bcbba4f99e33ac0daea893e280068304", "text": "Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl such as after meal ingestion1 and remaining above 55 mg/dl such as after exercise2 or a moderate fast (60 h).3 This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies whose fluctuations are much wider (Table 2.1).4 This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: A decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas in the brain (e.g., hypothalamus where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone).5 All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. 
On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources or is either the result of the breakdown of glycogen in liver (glycogenolysis) or the formation of glucose in liver and kidney from other carbons compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen or (2) may undergo glycolysis, which can be non-oxidative producing pyruvate (which can be reduced to lactate or transaminated to form alanine) or oxidative through conversion to acetyl CoA which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Non-oxidative glycolysis carbons undergo gluconeogenesis and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 2.1).", "title": "" }, { "docid": "7eed84f959268599e1b724b0752f6aa5", "text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.", "title": "" }, { "docid": "e298599e7dc7d2acfc5382a542322762", "text": "CONTEXT\nPedagogical practices reflect theoretical perspectives and beliefs that people hold about learning. Perspectives on learning are important because they influence almost all decisions about curriculum, teaching and assessment. Since Flexner's 1910 report on medical education, significant changes in perspective have been evident. Yet calls for major reform of medical education may require a broader conceptualisation of the educational process.\n\n\nPAST AND CURRENT PERSPECTIVES\nMedical education has emerged as a complex transformative process of socialisation into the culture and profession of medicine. Theory and research, in medical education and other fields, have contributed important understanding. Learning theories arising from behaviourist, cognitivist, humanist and social learning traditions have guided improvements in curriculum design and instruction, understanding of memory, expertise and clinical decision making, and self-directed learning approaches. Although these remain useful, additional perspectives which recognise the complexity of education that effectively fosters the development of knowledge, skills and professional identity are needed.\n\n\nFUTURE PERSPECTIVES\nSocio-cultural learning theories, particularly situated learning, and communities of practice offer a useful theoretical perspective. 
They view learning as intimately tied to context and occurring through participation and active engagement in the activities of the community. Legitimate peripheral participation describes learners' entry into the community. As learners gain skill, they assume more responsibility and move more centrally. The community, and the people and artefacts within it, are all resources for learning. Learning is both collective and individual. Social cognitive theory offers a complementary perspective on individual learning. Situated learning allows the incorporation of other learning perspectives and includes workplace learning and experiential learning. Viewing medical education through the lens of situated learning suggests teaching and learning approaches that maximise participation and build on community processes to enhance both collective and individual learning.", "title": "" }, { "docid": "28c19bf17c76a6517b5a7834216cd44d", "text": "The concept of augmented reality audio characterizes techniques where a real sound environment is extended with virtual auditory environments and communications scenarios. A framework is introduced for mobile augmented reality audio (MARA) based on a specific headset configuration where binaural microphone elements are integrated into stereo earphones. When microphone signals are routed directly to the earphones, a user is exposed to a pseudoacoustic representation of the real environment. Virtual sound events are then mixed with microphone signals to produce a hybrid, an augmented reality audio representation, for the user. An overview of related technology, literature, and application scenarios is provided. Listening test results with a prototype system show that the proposed system has interesting properties. For example, in some cases listeners found it very difficult to determine which sound sources in an augmented reality audio representation are real and which are virtual.", "title": "" }, { "docid": "70df4eee6d98efdbb741e125271f395c", "text": "Mobile Ad Hoc networks are autonomously self-organized networks without infrastructure support. Wireless sensor networks are appealing to researchers due to their wide range of application potential in areas such as target detection and tracking, environmental monitoring, industrial process monitoring, and tactical systems. Highly dynamic topology and bandwidth constraint in dense networks, brings the necessity to achieve an efficient medium access protocol subject to power constraints. Various MAC protocols with different objectives were proposed for wireless sensor networks. The aim of this paper is to outline the significance of various MAC protocols along with their merits and demerits.", "title": "" }, { "docid": "1c9c30e3e007c2d11c6f5ebd0092050b", "text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. 
This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.", "title": "" }, { "docid": "d308f7ebd3f91c42023f4502fd23bc18", "text": "We present an approach for object segmentation in videos that combines frame-level object detection with concepts from object tracking and motion segmentation. The approach extracts temporally consistent object tubes based on an off-the-shelf detector. Besides the class label for each tube, this provides a location prior that is independent of motion. For the final video segmentation, we combine this information with motion cues. The method overcomes the typical problems of weakly supervised/unsupervised video segmentation, such as scenes with no motion, dominant camera motion, and objects that move as a unit. In contrast to most tracking methods, it provides an accurate, temporally consistent segmentation of each object. We report results on four video segmentation datasets: YouTube Objects, SegTrackv2, egoMotion, and FBMS.", "title": "" } ]
scidocsrr
f11778ec3603b0782524282af1f1ec29
Considering Race a Problem of Transfer Learning
[ { "docid": "48f784f6fe073c55efbc990b2a2257c6", "text": "Faces convey a wealth of social signals, including race, expression, identity, age and gender, all of which have attracted increasing attention from multi-disciplinary research, such as psychology, neuroscience, computer science, to name a few. Gleaned from recent advances in computer vision, computer graphics, and machine learning, computational intelligence based racial face analysis has been particularly popular due to its significant potential and broader impacts in extensive real-world applications, such as security and defense, surveillance, human computer interface (HCI), biometric-based identification, among others. These studies raise an important question: How implicit, non-declarative racial category can be conceptually modeled and quantitatively inferred from the face? Nevertheless, race classification is challenging due to its ambiguity and complexity depending on context and criteria. To address this challenge, recently, significant efforts have been reported toward race detection and categorization in the community. This survey provides a comprehensive and critical review of the state-of-the-art advances in face-race perception, principles, algorithms, and applications. We first discuss race perception problem formulation and motivation, while highlighting the conceptual potentials of racial face processing. Next, taxonomy of feature representational models, algorithms, performance and racial databases are presented with systematic discussions within the unified learning scenario. Finally, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potentially important cross-cutting themes and research directions for the issue of learning race from face.", "title": "" }, { "docid": "b4a0ab9e1d074bff67f80df57a732d8d", "text": "We study to what extend Chinese, Japanese and Korean faces can be classified and which facial attributes offer the most important cues. First, we propose a novel way of ob- taining large numbers of facial images with nationality la- bels. Then we train state-of-the-art neural networks with these labeled images. We are able to achieve an accuracy of 75.03% in the classification task, with chances being 33.33% and human accuracy 49% . Further, we train mul- tiple facial attribute classifiers to identify the most distinc- tive features for each group. We find that Chinese, Japanese and Koreans do exhibit substantial differences in certain at- tributes, such as bangs, smiling, and bushy eyebrows. Along the way, we uncover several gender-related cross-country patterns as well. Our work, which complements existing APIs such as Microsoft Cognitive Services and Face++, could find potential applications in tourism, e-commerce, social media marketing, criminal justice and even counter- terrorism.", "title": "" } ]
[ { "docid": "69f597aac301a492892354dd593a4355", "text": "The influence of user generated content on e-commerce websites and social media has been addressed in both practical and theoretical fields. Since most previous studies focus on either electronic word of mouth (eWOM) from e-commerce websites (EC-eWOM) or social media (SM-eWOM), little is known about the adoption process when consumers are presented EC-eWOM and SM-eWOM simultaneously. We focus on this problem by considering their adoption as an interactive process. It clarifies the mechanism of consumer’s adoption for those from the perspective of cognitive cost theory. A conceptual model is proposed about the relationship between the adoptions of the two types of eWOM. The empirical analysis shows that EC-eWOM’s usefulness and credibility positively influence the adoption of EC-eWOM, but negatively influence that of SM-eWOM. EC-eWOM adoption negatively impacts SM-eWOM adoption, and mediates the relationship between usefulness, credibility and SM-eWOM adoption. The moderating effects of consumers’ cognitive level and degree of involvement are also discussed. This paper further explains the adoption of the two types of eWOM based on the cognitive cost theory and enriches the theoretical research about eWOM in the context of social commerce. Implications for practice, as well as suggestions for future research, are also discussed. 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2a4f8fdee23dfb009b61899d5773206f", "text": "We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2Dsupervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn. P. Henderson School of Informatics, University of Edinburgh, Scotland E-mail: paul@pmh47.net V. 
Ferrari Google Research, Zürich, Switzerland E-mail: vittoferrari@google.com", "title": "" }, { "docid": "ee223b75a3a99f15941e4725d261355e", "text": "BACKGROUND\nIn Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups.\n\n\nOBJECTIVE\nThe objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels.\n\n\nDESIGN\nWe estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition.\n\n\nRESULTS\nAt the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas anemia and the prevalence of overweight or obesity in women were not different from that expected.\n\n\nCONCLUSIONS\nAlthough some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions.", "title": "" }, { "docid": "20a0cf9c98c80aed67e9e57718ea672b", "text": "The evolution of the Internet and its applications has led to a notable increase in concern about social networking sites (SNSs). SNSs have had global mass appeal and their often frequent use – usually by young people – has triggered worries, discussions and studies on the topic of technological and social networking addictions. In addressing this issue, we have to ask to what extent technological and social networking addictions are of the same nature as substance addictions, and whether the consequences they lead to, if any, are severe enough to merit clinical attention. We can summarize our position on the topic by saying that SNSs are primarily used to increase social capital and that there is not currently enough empirical evidence on SNSs’ addiction potential to claim that SNS addition exists. Although SNSs can provoke certain negative consequences in a subset of their users or provide a platform for the expression of preexisting conditions, this is not sufficient support for their standalone addictive power. 
It is necessary to distinguish between true addictive disorders, the kind that fall under the category of substance addictions, and the negative side-effects of engaging with certain appealing activities like SNSs so that we do not undermine the severity of psychiatric disorders and the experience of the individuals suffering from them. We propose that psychoeducation, viewing SNS use in context to understand their gratifications and compensatory functions and revisiting the terminology on the subject are sufficient to address the problems that emerge from SNS usage. ARTICLE HISTORY Received 17 June 2015 Revised 1 June 2016 Accepted 1 June 2016 Published online 4 July 2016", "title": "" }, { "docid": "7d2a8a4008f97738d8eacf42ea390692", "text": "Relational inference is a crucial technique for knowledge base population. The central problem in the study of relational inference is to infer unknown relations between entities from the facts given in the knowledge bases. Two popular models have been put forth recently to solve this problem, which are the latent factor models and the random-walk models, respectively. However, each of them has their pros and cons, depending on their computational efficiency and inference accuracy. In this paper, we propose a hierarchical random-walk inference algorithm for relational learning in large scale graph-structured knowledge bases, which not only maintains the computational simplicity of the random-walk models, but also provides better inference accuracy than related works. The improvements come from two basic assumptions we proposed in this paper. Firstly, we assume that although a relation between two entities is syntactically directional, the information conveyed by this relation is equally shared between the connected entities, thus all of the relations are semantically bidirectional. Secondly, we assume that the topology structures of the relation-specific subgraphs in knowledge bases can be exploited to improve the performance of the random-walk based relational inference algorithms. The proposed algorithm and ideas are validated with numerical results on experimental data sampled from practical knowledge bases, and the results are compared to state-of-the-art approaches.", "title": "" }, { "docid": "7ce1646e0fe1bd83f9feb5ec20233c93", "text": "An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). 
Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.", "title": "" }, { "docid": "d2521791d515b69d5a4a8c9ea02e3d17", "text": "In this paper, four-wheel active steering (4WAS), which can control the front wheel steering angle and rear wheel steering angle independently, has been investigated based on the analysis of deficiency of conventional four wheel steering (4WS). A model following control structure is adopted to follow the desired yaw rate and vehicle sideslip angle, which consists of feedforward and feedback controller. The feedback controller is designed based on the optimal control theory, minimizing the tracking errors between the outputs of actual vehicle model and that of linear reference model. Finally, computer simulations are performed to evaluate the proposed control system via the co-simulation of Matlab/Simulink and CarSim. Simulation results show that the designed 4WAS controller can achieve the good response performance and improve the vehicle handling and stability.", "title": "" }, { "docid": "e5d2771610e1f1d3153937b072fd8d31", "text": "The role of the gut microbiome in models of inflammatory and autoimmune disease is now well characterized. Renewed interest in the human microbiome and its metabolites, as well as notable advances in host mucosal immunology, has opened multiple avenues of research to potentially modulate inflammatory responses. The complexity and interdependence of these diet-microbe-metabolite-host interactions are rapidly being unraveled. Importantly, most of the progress in the field comes from new knowledge about the functional properties of these microorganisms in physiology and their effect in mucosal immunity and distal inflammation. This review summarizes the preclinical and clinical evidence on how dietary, probiotic, prebiotic, and microbiome based therapeutics affect our understanding of wellness and disease, particularly in autoimmunity.", "title": "" }, { "docid": "03a2b9ebdac78ca3a6c808f87f73c26b", "text": "OBJECTIVE\nPost-traumatic stress disorder (PTSD) has major public health significance. Evidence that PTSD may be associated with premature senescence (early or accelerated aging) would have major implications for quality of life and healthcare policy. We conducted a comprehensive review of published empirical studies relevant to early aging in PTSD.\n\n\nMETHOD\nOur search included the PubMed, PsycINFO, and PILOTS databases for empirical reports published since the year 2000 relevant to early senescence and PTSD, including: 1) biomarkers of senescence (leukocyte telomere length [LTL] and pro-inflammatory markers), 2) prevalence of senescence-associated medical conditions, and 3) mortality rates.\n\n\nRESULTS\nAll six studies examining LTL indicated reduced LTL in PTSD (pooled Cohen's d = 0.76). 
We also found consistent evidence of increased pro-inflammatory markers in PTSD (mean Cohen's ds), including C-reactive protein = 0.18, Interleukin-1 beta = 0.44, Interleukin-6 = 0.78, and tumor necrosis factor alpha = 0.81. The majority of reviewed studies also indicated increased medical comorbidity among several targeted conditions known to be associated with normal aging, including cardiovascular disease, type 2 diabetes mellitus, gastrointestinal ulcer disease, and dementia. We also found seven of 10 studies indicated PTSD to be associated with earlier mortality (average hazard ratio: 1.29).\n\n\nCONCLUSION\nIn short, evidence from multiple lines of investigation suggests that PTSD may be associated with a phenotype of accelerated senescence. Further research is critical to understand the nature of this association. There may be a need to re-conceptualize PTSD beyond the boundaries of mental illness, and instead as a full systemic disorder.", "title": "" }, { "docid": "5e64e36e76f4c0577ae3608b6e715a1f", "text": "Deep learning has recently become very popular on account of its incredible success in many complex data-driven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.", "title": "" }, { "docid": "08353c7d40a0df4909b09f2d3e5ab4fe", "text": "Object detection has made great progress in the past few years along with the development of deep learning. However, most current object detection methods are resource hungry, which hinders their wide deployment to many resource-restricted usages such as usages on always-on devices, battery-powered low-end devices, etc. This paper considers the resource and accuracy trade-off for resource-restricted usages when designing the whole object detection framework. Based on the deeply supervised object detection (DSOD) framework, we propose Tiny-DSOD, dedicated to resource-restricted usages. Tiny-DSOD introduces two innovative and ultra-efficient architecture blocks: depthwise dense block (DDB) based backbone and depthwise feature-pyramid-network (D-FPN) based front-end. We conduct extensive experiments on three famous benchmarks (PASCAL VOC 2007, KITTI, and COCO), and compare Tiny-DSOD to the state-of-the-art ultra-efficient object detection solutions such as Tiny-YOLO, MobileNet-SSD (v1 & v2), SqueezeDet, Pelee, etc. Results show that Tiny-DSOD outperforms these solutions in all the three metrics (parameter-size, FLOPs, accuracy) in each comparison. For instance, Tiny-DSOD achieves 72.1% mAP with only 0.95M parameters and 1.06B FLOPs, which is by far the state-of-the-art result with such a low resource requirement.", "title": "" }, { "docid": "7b717d6c4506befee2a374333055e2d1", "text": "Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. 
In particular, as a major breakthrough in the field, deep learning has proven as an extremely powerful tool in many fields. Shall we embrace deep learning as the key to all? Or, should we resist a “black-box” solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate remote sensing scientists to bring their expertise into deep learning, and use it as an implicit general model to tackle unprecedented large-scale influential challenges, such as climate change and urbanization.", "title": "" }, { "docid": "db422d1fcb99b941a43e524f5f2897c2", "text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. 
The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.", "title": "" }, { "docid": "5d6bd34fb5fdb44950ec5d98e77219c3", "text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.", "title": "" }, { "docid": "b44df1268804e966734ea404b8c29360", "text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. 
The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.", "title": "" }, { "docid": "e74ef9d0ededd1bf4b7701c2b53eacab", "text": "This paper presents an outline of our work to develop a word sense disambiguation system in Malayalam. Word sense disambiguation (WSD) is a linguistically based mechanism for automatically defining the correct sense of a word in the context. WSD is a long standing problem in computational linguistics. A particular word may have different meanings in different contexts. For human beings, it is easy to extract the correct meaning by analyzing the sentences. In the area of natural language processing, we are trying to simulate all of these human capabilities with a computer system. In many natural language processing tasks such as machine translation, information retrieval etc., Word Sense Disambiguation plays an important role to improve the quality of systems.", "title": "" }, { "docid": "de6c7d12013908e27abda219326d9054", "text": "A network’s physical layer is deceptively quiet. Hub lights blink in response to network traffic, but do little to convey the range of information that the network carries. Analysis of the individual traffic flows and their content is essential to a complete understanding of network usage. Many tools let you view traffic in real time, but real-time monitoring at any level requires significant human and hardware resources, and doesn’t scale to networks larger than a single workgroup. It is generally more practical to archive all traffic and analyze subsets as necessary. This process is known as reconstructive traffic analysis, or network forensics.1 In practice, it is often limited to data collection and packetlevel inspection; however, a network forensics analysis tool (NFAT) can provide a richer view of the data collected, allowing you to inspect the traffic from further up the protocol stack.2 The IT industry’s ever-growing concern with security is the primary motivation for network forensics. A network that has been prepared for forensic analysis is easy to monitor, and security vulnerabilities and configuration problems can be conveniently identified. It also allows the best possible analysis of security violations. Most importantly, analyzing a complete record of your network traffic with the appropriate reconstructive tools provides context for other breach-related events. For example, if your analysis detects a user account and its Pretty Good Privacy (PGP, www.pgp.com/index.php) keys being compromised, good practice requires you to review all subsequent activity by that user, or involving those keys. In some industries, laws such as the Health Insurance Portability and Accountability Act (HIPAA, http://cms.hhs.gov/hipaa) regulate monitoring the flow of information. 
While it is often difficult to balance what is required by law and what is technically feasible, a forensic record of network traffic is a good first step. Security and legal concerns are not the only reasons to want a fuller understanding of your network traffic, however. Forensics tool users have reported many other applications. If your mail server has lost several hours’ or days’ worth of received messages and traditional backup methods have failed, you can recover the messages from the recorded traffic. Similarly, the forensics record allows unhurried analysis of anomalies such as traffic spikes or application errors that might otherwise have remained hearsay.", "title": "" }, { "docid": "69c65c1cbec5d4843797b7ba1a1551be", "text": "The role of personal data gained significance across all business domains in past decades. Despite strict legal restrictions that processing personal data is subject to, users tend to respond to the extensive collection of data by service providers with distrust. Legal battles between data subjects and processors emphasized the need of adaptations by the current law to face today’s challenges. The European Union has taken action by introducing the General Data Protection Regulation (GDPR), which was adopted in April 2016 and will inure in May 2018. The GDPR extends existing data privacy rights of EU citizens and simultaneously puts pressure on controllers and processors by defining high penalties in case of non-compliance. Uncertainties remain to which extent controllers and processors need to adjust their existing technologies in order to conform to the new law. This work designs, implements, and evaluates a privacy dashboard for data subjects intending to enable and ease the execution of data privacy rights granted by the GDPR.", "title": "" }, { "docid": "509075d64990cf7258c13dd0dfd5e282", "text": "In recent years we have seen a tremendous growth in applications of passive sensor-enabled RFID technology by researchers; however, their usability in applications such as activity recognition is limited by a key issue associated with their incapability to handle unintentional brownout events leading to missing significant sensed events such as a fall from a chair. Furthermore, due to the need to power and sample a sensor the practical operating range of passive-sensor enabled RFID tags are also limited with respect to passive RFID tags. Although using active or semi-passive tags can provide alternative solutions, they are not without the often undesirable maintenance and limited lifespan issues due to the need for batteries. In this article we propose a new hybrid powered sensor-enabled RFID tag concept which can sustain the supply voltage to the tag circuitry during brownouts and increase the operating range of the tag by combining the concepts from passive RFID tags and semipassive RFID tags, while potentially eliminating shortcomings of electric batteries. We have designed and built our concept, evaluated its desirable properties through extensive experiments and demonstrate its significance in the context of a human activity recognition application.", "title": "" }, { "docid": "31b449b209beaadbbcc36c485517c3cf", "text": "While a number of information visualization software frameworks exist, creating new visualizations, especially those that involve novel visualization metaphors, interaction techniques, data analysis strategies, and specialized rendering algorithms, is still often a difficult process. 
To facilitate the creation of novel visualizations we present a new software framework, behaviorism, which provides a wide range of flexibility when working with dynamic information on visual, temporal, and ontological levels, but at the same time providing appropriate abstractions which allow developers to create prototypes quickly which can then easily be turned into robust systems. The core of the framework is a set of three interconnected graphs, each with associated operators: a scene graph for high-performance 3D rendering, a data graph for different layers of semantically-linked heterogeneous data, and a timing graph for sophisticated control of scheduling, interaction, and animation. In particular, the timing graph provides a unified system to add behaviors to both data and visual elements, as well as to the behaviors themselves. To evaluate the framework we look briefly at three different projects all of which required novel visualizations in different domains, and all of which worked with dynamic data in different ways: an interactive ecological simulation, an information art installation, and an information visualization technique.", "title": "" } ]
scidocsrr
50905a794a5800f5df319f20ca3452f8
Mobile Edge Computing: Opportunities, solutions, and challenges
[ { "docid": "016a07d2ddb55149708409c4c62c67e3", "text": "Cloud computing has emerged as a computational paradigm and an alternative to the conventional computing with the aim of providing reliable, resilient infrastructure, and with high quality of services for cloud users in both academic and business environments. However, the outsourced data in the cloud and the computation results are not always trustworthy because of the lack of physical possession and control over the data for data owners as a result of using to virtualization, replication and migration techniques. Since that the security protection the threats to outsourced data have become a very challenging and potentially formidable task in cloud computing, many researchers have focused on ameliorating this problem and enabling public auditability for cloud data storage security using remote data auditing (RDA) techniques. This paper presents a comprehensive survey on the remote data storage auditing in single cloud server domain and presents taxonomy of RDA approaches. The objective of this paper is to highlight issues and challenges to current RDA protocols in the cloud and the mobile cloud computing. We discuss the thematic taxonomy of RDA based on significant parameters such as security requirements, security metrics, security level, auditing mode, and update mode. The state-of-the-art RDA approaches that have not received much coverage in the literature are also critically analyzed and classified into three groups of provable data possession, proof of retrievability, and proof of ownership to present a taxonomy. It also investigates similarities and differences in such framework and discusses open research issues as the future directions in RDA research. & 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d141c13cea52e72bb7b84d3546496afb", "text": "A number of resource-intensive applications, such as augmented reality, natural language processing, object recognition, and multimedia-based software are pushing the computational and energy boundaries of smartphones. Cloud-based services augment the resource-scare capabilities of smartphones while offloading compute-intensive methods to resource-rich cloud servers. The amalgam of cloud and mobile computing technologies has ushered the rise of Mobile Cloud Computing (MCC) paradigm which envisions operating smartphones and modern mobile devices beyond their intrinsic capabilities. System virtualization, application virtualization, and dynamic binary translation (DBT) techniques are required to address the heterogeneity of smartphone and cloud architectures. However, most of the current research work has only focused on the offloading of virtualized applications while giving limited consideration to native code offloading. Moreover, researchers have not attended to the requirements of multimedia based applications in MCC offloading frameworks. In this study, we present a survey and taxonomy of state-of-the-art MCC frameworks, DBT techniques for native offloading, and cross-platform execution techniques for multimedia based applications. We survey the MCC frameworks from the perspective of offload enabling techniques. We focus on native code offloading frameworks and analyze the DBT and emulation techniques of smartphones (ARM) on a cloud server (x86) architectures. 
Furthermore, we debate the open research issues and challenges to native offloading of multimedia based smartphone applications.", "title": "" }, { "docid": "55d88de1b0a5ebcf1c2909dea6072879", "text": "The unabated flurry of research activities to augment various mobile devices in terms of compute-intensive task execution by leveraging heterogeneous resources of available devices in the local vicinity has created a new research domain called mobile ad hoc cloud (MAC) or mobile cloud. It is a new type of mobile cloud computing (MCC). MAC is deemed to be a candidate blueprint for future compute-intensive applications with the aim of delivering high functionalities and rich impressive experience to mobile users. However, MAC is yet in its infancy, and a comprehensive survey of the domain is still lacking. In this paper, we survey the state-of-the-art research efforts carried out in the MAC domain. We analyze several problems inhibiting the adoption of MAC and review corresponding solutions by devising a taxonomy. Moreover, MAC roots are analyzed and taxonomized as architectural components, applications, objectives, characteristics, execution model, scheduling type, formation technologies, and node types. The similarities and differences among existing proposed solutions by highlighting the advantages and disadvantages are also investigated. We also compare the literature based on objectives. Furthermore, our study advocates that the problems stem from the intrinsic characteristics of MAC by identifying several new principles. Lastly, several open research challenges such as incentives, heterogeneity-ware task allocation, mobility, minimal data exchange, and security and privacy are presented as future research directions. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "3a47c4e3e5c98b9da1e1b73f2f6d3dc6", "text": "This paper examines a semantic approach for identity management, namely the W3C WebID, as a representation of personal information, and the WebID-TLS as a decentralized authentication protocol, allowing individuals to manage their own identities and data privacy. The paper identifies a set of important usability, privacy and security issues that needs to be addressed, and proposes an end to end authentication mechanism based on WebID, JSON Web Tokens (JWT) and the blockchain. The WebID includes a personal profile with its certificate, and the social relationship information described as the RDF-based FOAF ontology. The JWT is a standardized container format to encode personal related information in a secure way using \"claims\". The distributed, irreversible, undeletable, and immutable nature of the blockchain has appropriate attributes for distributed credential storage and decentralized identity management.", "title": "" }, { "docid": "24880289ca2b6c31810d28c8363473b3", "text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.", "title": "" }, { "docid": "43ee3d818b528081aadf6abdc23650fa", "text": "Cloud computing has become an increasingly important research topic given the strong evolution and migration of many network services to such computational environment. The problem that arises is related with efficiency management and utilization of the large amounts of computing resources. This paper begins with a brief retrospect of traditional scheduling, followed by a detailed review of metaheuristic algorithms for solving the scheduling problems by placing them in a unified framework. Armed with these two technologies, this paper surveys the most recent literature about metaheuristic scheduling solutions for cloud. 
In addition to applications using metaheuristics, some important issues and open questions are presented for the reference of future researches on scheduling for cloud.", "title": "" }, { "docid": "455068ecca4db680a8cd65bf127cfc91", "text": "OBJECTIVES\nLoneliness is common among older persons and has been associated with health and mental health risks. This systematic review examines the utility of loneliness interventions among older persons.\n\n\nDATA SOURCE\nThirty-four intervention studies were used. STUDY INCLUSION CRITERIA: The study was conducted between 1996 and 2011, included a sample of older adults, implemented an intervention affecting loneliness or identified a situation that directly affected loneliness, included in its outcome measures the effects of the intervention or situation on loneliness levels or on loneliness-related measures (e.g., social interaction), and included in its analysis pretest-posttest comparisons.\n\n\nDATA EXTRACTION\nStudies were accessed using the databases PsycINFO, MEDLINE, ScienceDirect, AgeLine, PsycBOOKS, and Google Scholar for the years 1996-2011.\n\n\nDATA SYNTHESIS\nInterventions were classified based on population, format, and content and were evaluated for quality of design and efficacy.\n\n\nRESULTS\nTwelve studies were effective in reducing loneliness according to the review criteria, and 15 were evaluated as potentially effective. The findings suggest that it is possible to reduce loneliness by using educational interventions focused on social networks maintenance and enhancement.\n\n\nCONCLUSIONS\nMultiple approaches show promise, although flawed design often prevents proper evaluation of efficacy. The value of specific therapy techniques in reducing loneliness is highlighted and warrants a wider investigation. Studies of special populations, such as the cognitively impaired, are also needed.", "title": "" }, { "docid": "9955b14187e172e34f233fec70ae0a38", "text": "Neural network language models (NNLM) have become an increasingly popular choice for large vocabulary continuous speech recognition (LVCSR) tasks, due to their inherent generalisation and discriminative power. This paper present two techniques to improve performance of standard NNLMs. First, the form of NNLM is modelled by introduction an additional output layer node to model the probability mass of out-of-shortlist (OOS) words. An associated probability normalisation scheme is explicitly derived. Second, a novel NNLM adaptation method using a cascaded network is proposed. Consistent WER reductions were obtained on a state-of-the-art Arabic LVCSR task over conventional NNLMs. Further performance gains were also observed after NNLM adaptation.", "title": "" }, { "docid": "3d7fabdd5f56c683de20640abccafc44", "text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. 
Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act at one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.", "title": "" }, { "docid": "be91ec9b4f017818f32af09cafbb2a9a", "text": "Object recognition is difficult because there is no simple relation between an object's properties and the retinal image. Where the object is located, how it is oriented, and how it is illuminated also affect the image. Moreover, the relation is under-determined: multiple physical configurations can give rise to the same retinal image. In the case of object color, the spectral power distribution of the light reflected from an object depends not only on the object's intrinsic surface reflectance but also on factors extrinsic to the object, such as the illumination. The relation between intrinsic reflectance, extrinsic illumination, and the color signal reflected to the eye is shown schematically in Figure 1. The light incident on a surface is characterized by its spectral power distribution E(λ). A small surface element reflects a fraction of the incident illuminant to the eye. The surface reflectance function S(λ) specifies this fraction as a function of wavelength. The spectrum of the light reaching the eye is called the color signal and is given by C(λ) = E(λ)S(λ). Information about C(λ) is encoded by three classes of cone photoreceptors, the L-, M-, and S-cones. The top two patches rendered in Plate 1 illustrate the large effect that a typical change in natural illumination (see Wyszecki and Stiles, 1982) can have on the color signal. This effect might lead us to expect that the color appearance of objects should vary radically, depending as much on the current conditions of illumination as on the object's surface reflectance. Yet the very fact that we can sensibly refer to objects as having a color indicates otherwise. Somehow our visual system stabilizes the color appearance of objects against changes in illumination, a perceptual effect that is referred to as color constancy. Because the illumination is the most salient object-extrinsic factor that affects the color signal, it is natural that emphasis has been placed on understanding how changing the illumination affects object color appearance. In a typical color constancy experiment, the independent variable is the illumination and the dependent variable is a measure of color appearance. Such experiments employ different stimulus configurations and psychophysical tasks, but taken as a whole they support the view that human vision exhibits a reasonable degree of color constancy. Recall that the top two patches of Plate 1 illustrate the limiting case where a single surface reflectance is seen under multiple illuminations. Although this …", "title": "" }, { "docid": "abdc445e498c6d04e8f046e9c2610f9f", "text": "Ontologies have recently received popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. 
An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge. The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) ease of update and maintenance, d) market status and penetration. The results of the review of ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology Building/Management Tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most users use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper also concerns the detection of commonalities and differences between the examined ontologies, both on the same domain (application area) and among different domains.", "title": "" }, { "docid": "e4a59205189e8cca8a1aba704460f8ec", "text": "In this paper, we compare two methods for article summarization. The first method is mainly based on term-frequency, while the second method is based on ontology. We build an ontology database for analyzing the main topics of the article. After identifying the main topics and determining their relative significance, we rank the paragraphs based on the relevance between the main topics and each individual paragraph. Depending on the ranks, we choose a desired proportion of paragraphs as the summary. Experimental results indicate that both methods offer similar accuracy in their selections of the paragraphs.", "title": "" }, { "docid": "b0709248d08564b7d1a1f23243aa0946", "text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world of the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g., through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. 
It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.", "title": "" }, { "docid": "ae73bdfbfe949201036f00820f20a086", "text": "Increasing efficiency by improving locomotion methods is a key issue for underwater robots. Moreover, a number of different control design challenges must be solved to realize operational swimming robots for underwater tasks. This article proposes and experimentally validates a straightline-path-following controller for biologically inspired swimming snake robots. In particular, a line-of-sight (LOS) guidance law is presented, which is combined with a sinusoidal gait pattern and a directional controller that steers the robot toward and along the desired path. The performance of the path-following controller is investigated through experiments with a physical underwater snake robot for both lateral undulation and eel-like motion. In addition, fluid parameter identification is performed, and simulation results based on the identified fluid coefficients are presented to obtain a back-to-back comparison with the motion of the physical robot during the experiments. The experimental results show that the proposed control strategy successfully steers the robot toward and along the desired path for both lateral undulation and eel-like motion patterns.", "title": "" }, { "docid": "9c8f54b087d90a2bcd9e3d7db1aabd02", "text": "The \"new Dark Silicon\" model benchmarks transistor technologies at the architectural level for multi-core processors.", "title": "" }, { "docid": "48bb48f6f63e233d17441494d8b81b2a", "text": "With the proliferation of mobile computing technology, mobile learning (m-learning) will play a vital role in the rapidly growing electronic learning market. M-learning is the delivery of learning to students anytime and anywhere through the use of wireless Internet and mobile devices. However, acceptance of m-learning by individuals is critical to the successful implementation of m-learning systems. Thus, there is a need to research the factors that affect user intention to use m-learning. Based on the unified theory of acceptance and use of technology (UTAUT), which integrates elements across eight models of information technology use, this study was to investigate the determinants of m-learning acceptance and to discover if there exist either age or gender differences in the acceptance of m-learning, or both. Data collected from 330 respondents in Taiwan were tested against the research model using the structural equation modelling approach. The results indicate that performance expectancy, effort expectancy, social influence, perceived playfulness, and self-management of learning were all significant determinants of behavioural intention to use m-learning. We also found that age differences moderate the effects of effort expectancy and social influence on m-learning use intention, and that gender differences moderate the effects of social influence and self-management of learning on m-learning use intention. These findings provide several important implications for m-learning acceptance, in terms of both research and practice. British Journal of Educational Technology Vol 40 No 1 2009 92–118 doi:10.1111/j.1467-8535.2007.00809.x © 2007 The Authors. Journal compilation © 2007 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. 
Introduction The use of information and communication technology (ICT) may improve learning, especially when coupled with more learner-centred instruction (Zhu & Kaplan, 2002). From notebook computers to wireless phones and handheld devices, the massive infusion of computing devices and rapidly improving Internet capabilities have altered the nature of higher education (Green, 2000). Mobile learning (m-learning) is the follow up of e-learning, which for its part originates from distance education. M-learning refers to the delivery of learning to students anytime and anywhere through the use of wireless Internet and mobile devices, including mobile phones, personal digital assistants (PDAs), smart phones and digital audio players. Namely, m-learning users can interact with educational resources while away from their normal place of learning— the classroom or desktop computer. The place independence of mobile devices provides several benefits for e-learning environments, such as allowing students and instructors to utilise their spare time while traveling in trains or buses to finish their homework or lesson preparation (Virvou & Alepis, 2005). If e-learning took learning away from the classroom, then m-learning is taking learning away from a fixed location (Cmuk, 2007). Motiwalla (2007) contends that learning on mobile devices will never replace classroom or other e-learning approaches. Thus, m-learning is a complementary activity to both e-learning and traditional learning. However, Motiwalla (2007) also suggests that if leveraged properly, mobile technology can complement and add value to the existing learning models, such as the social constructive theory of learning with technology (Brown & Campione, 1996) and conversation theory (Pask, 1975). Thus, some believe that m-learning is becoming progressively more significant, and that it will play a vital role in the rapidly growing e-learning market. Despite the tremendous growth and potential of the mobile devices and networks, wireless e-learning and m-learning are still in their infancy or embryonic stage (Motiwalla, 2007). While the opportunities provided by m-learning are new, there are several challenges facing m-learning, such as connectivity, small screen sizes, limited processing power and reduced input capabilities. Siau, Lim and Shen (2001) also note that mobile devices have ‘(1) small screens and small multifunction key pads; (2) less computational power, limited memory and disk capacity; (3) shorter battery life; (4) complicated text input mechanisms; (5) higher risk of data storage and transaction errors; (6) lower display resolution; (7) less surfability; (8) unfriendly user-interfaces; and (9) graphical limitations’ (p. 6). Equipped with a small phone-style keyboard or a touch screen, users might require more time to search for some information on a page than they need to read it (Motiwalla, 2007). These challenges mean that adapting existing e-learning services to m-learning is not an easy work, and that users may be inclined to not accept m-learning. Thus, the success of m-learning may depend on whether or not users are willing to adopt the new technology that is different from what they have used in the past. 
While e-learning and mobile commerce/learning have received extensive attention (Concannon, Flynn & Campbell, 2005; Davies & Graff, 2005; Govindasamy, 2002; Harun, 2002; Ismail, 2002; Luarn & Lin, 2005; Mwanza & Engeström, 2005; Motiwalla, 2007; Pituch & Lee, 2006; Selim, 2007; Shee & Wang, in press; Ravenscroft & Matheson, 2002; Wang, 2003), thus far, little research has been conducted to investigate the factors affecting users’ intentions to adopt m-learning, and to explore the age and gender differences in terms of the acceptance of m-learning. As Pedersen and Ling (2003) suggest, even though traditional Internet services and mobile services are expected to converge into mobile Internet services, few attempts have been made to apply traditional information technology (IT) adoption models to explain their potential adoption. Consequently, the objective of this study was to investigate the determinants, as well as the age and gender differences, in the acceptance of m-learning based on the unified theory of acceptance and use of technology (UTAUT) proposed by Venkatesh, Morris, Davis and Davis (2003). The remainder of this paper is organised as follows. In the next section, we review the UTAUT and show our reasoning for adopting it as the theoretical framework of this study. This is followed by descriptions of the research model and methods. We then present the results of the data analysis and hypotheses testing. Finally, the implications and limitations of this study are discussed. Unified Theory of Acceptance and Use of Technology M-learning acceptance is the central theme of this study, and represents a fundamental managerial challenge in terms of m-learning implementation. A review of prior studies provided a theoretical foundation for hypotheses formulation. Based on eight prominent models in the field of IT acceptance research, Venkatesh et al (2003) proposed a unified model, called the unified theory of acceptance and use of technology (UTAUT), which integrates elements across the eight models. The eight models consist of the theory of reasoned action (TRA) (Fishbein & Ajzen, 1975), the technology acceptance model (TAM) (Davis, 1989), the motivational model (MM) (Davis, Bagozzi & Warshaw, 1992), the theory of planned behaviour (TPB) (Ajzen, 1991), the combined TAM and TPB (C-TAM-TPB) (Taylor & Todd, 1995a), the model of PC utilisation (MPCU) (Triandis, 1977; Thompson, Higgins & Howell, 1991), the innovation diffusion theory (IDT) (Rogers, 2003; Moore & Benbasat, 1991) and the social cognitive theory (SCT) (Bandura, 1986; Compeau & Higgins, 1995). Based on Venkatesh et al’s (2003) study, we briefly review the core constructs in each of the eight models, which have been theorised as the determinants of IT usage intention and/or behaviour. First, TRA has been considered to be one of the most fundamental and influential theories on human behaviour. Attitudes toward behaviour and subjective norms are the two core constructs in TRA. Second, TAM was originally developed to predict IT acceptance and usage on the job, and has been extensively applied to various types of technologies and users. Perceived usefulness and perceived ease of use are the two main constructs mentioned in TAM. More recently, Venkatesh and Davis (2000) presented TAM2 by adding subjective norms to the TAM in the case of mandatory settings. 
Third, Davis et al (1992) employed motivation theory to understand new technology acceptance and usage, focusing on the primary constructs of extrinsic motivation and intrinsic motivation. Fourth, TPB extended TRA by including the construct of perceived behavioural control, and has been successfully applied to the understanding of individual acceptance and usage of various technologies (Harrison, Mykytyn & Riemenschneider, 1997; Mathieson, 1991; Taylor & Todd, 1995b). Fifth, C-TAM-TPB is a hybrid model that combines the predictors of TPB with perceived usefulness from TAM. Sixth, based on Triandis’ (1977) theory of human behaviour, Thompson et al (1991) presented the MPCU and used this model to predict PC utilisation. MPCU consists of six constructs, including job fit, complexity, long-term consequences, affect towards use, social factors and facilitating conditions. Seventh, Moore and Benbasat (1991) adapted the properties of innovations posited by IDT and refined a set of constructs that could be used to explore individual technology acceptance. These constructs include relative advantage, ease of use, image, visibility, compatibility, results demonstrability and voluntariness of use. Finally, Compeau and Higgins (1995) applied and extended SCT to the context of computer utilisation (see also Compeau, Higgins &", "title": "" }, { "docid": "458392765ce4aa8b61eda7efd51aad8d", "text": "The goal of active learning is to minimise the cost of producing an annotated dataset, in which annotators are assumed to be perfect, i.e., they always choose the correct labels. However, in practice, annotators are not infallible, and they are likely to assign incorrect labels to some instances. Proactive learning is a generalisation of active learning that can model different kinds of annotators. Although proactive learning has been applied to certain labelling tasks, such as text classification, there is little work on its application to named entity (NE) tagging. In this paper, we propose a proactive learning method for producing NE annotated corpora, using two annotators with different levels of expertise, and who charge different amounts based on their levels of experience. To optimise both cost and annotation quality, we also propose a mechanism to present multiple sentences to annotators at each iteration. Experimental results for several corpora show that our method facilitates the construction of high-quality NE labelled datasets at minimal cost.", "title": "" }, { "docid": "63685ec8d8697d6f811f38b24c9a4e8c", "text": "Over the past decade, our group has approached interaction design from an industrial design point of view. In doing so, we focus on a branch of design called “formgiving”. Whilst formgiving is somewhat of a neologism in English, many other European languages do have a separate word for form-related design, including German (Gestaltung), Danish (formgivnin), Swedish (formgivning) and Dutch (vormgeving). Traditionally, formgiving has been concerned with such aspects of objects as form, colour, texture and material. In the context of interaction design, we have come to see formgiving as the way in which objects appeal to our senses and motor skills. In this paper, we first describe our approach to interaction design of electronic products. 
We start with how we have been first inspired and then disappointed by the Gibsonian perception movement [1], how we have come to see both appearance and actions as carriers of meaning, and how we see usability and aesthetics as inextricably linked. We then show a number of interaction concepts for consumer electronics with both our initial thinking and what we learnt from them. Finally, we discuss the relevance of all this for tangible interaction. We argue that, in addition to a data-centred view, it is also possible to take a perceptual-motor-centred view on tangible interaction. In this view, it is the rich opportunities for differentiation in appearance and action possibilities that make physical objects open up new avenues to meaning and aesthetics in interaction design.", "title": "" }, { "docid": "ed0465dc58b0f9c62e729fed4054bb58", "text": "In this study, an instructional design model was employed for restructuring a teacher education course with technology. The model was applied in a science education method course, which was offered in two different but consecutive semesters with a total enrollment of 111 students in the fall semester and 116 students in the spring semester. Using tools, such as multimedia authoring tools in the fall semester and modeling software in the spring semester, teacher educators designed high quality technology-infused lessons for science and, thereafter, modeled them in the classroom for preservice teachers. An assessment instrument was constructed to assess preservice teachers' technology competency, which was measured in terms of four aspects, namely, (a) selection of appropriate science topics to be taught with technology, (b) use of appropriate technology-supported representations and transformations for science content, (c) use of technology to support teaching strategies, and (d) integration of computer activities with appropriate inquiry-based pedagogy in the science classroom. The results of a MANOVA showed that preservice teachers in the Modeling group outperformed preservice teachers' overall performance in the Multimedia group, F = 21.534, p = 0.000. More specifically, the Modeling group outperformed the Multimedia group on only two of the four aspects of technology competency, namely, use of technology to support teaching strategies and integration of computer activities with appropriate pedagogy in the classroom, F = 59.893, p = 0.000, and F = 10.943, p = 0.001 respectively. The results indicate that the task of preparing preservice teachers to become technology competent is difficult and requires many efforts for providing them with ample opportunities during their education to develop the competencies needed to be able to teach with technology. 2004 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "ece8f2f4827decf0c440ca328ee272b4", "text": "We describe an algorithm for converting linear support vector machines and any other arbitrary hyperplane-based linear classifiers into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm is formulated as a constrained optimization problem that is computationally inexpensive to solve. We discuss various properties of the algorithm and provide proof of convergence for two different optimization criteria We demonstrate the performance and the speed of the algorithm on linear classifiers learned from real-world datasets, including a medical dataset on detection of lung cancer from medical images. The ability to convert SVM's and other \"black-box\" classifiers into a set of human-understandable rules, is critical not only for physician acceptance, but also to reducing the regulatory barrier for medical-decision support systems based on such classifiers.", "title": "" }, { "docid": "de4e2e131a0ceaa47934f4e9209b1cdd", "text": "With the popularity of mobile devices, spatial crowdsourcing is rising as a new framework that enables human workers to solve tasks in the physical world. With spatial crowdsourcing, the goal is to crowdsource a set of spatiotemporal tasks (i.e., tasks related to time and location) to a set of workers, which requires the workers to physically travel to those locations in order to perform the tasks. In this article, we focus on one class of spatial crowdsourcing, in which the workers send their locations to the server and thereafter the server assigns to every worker tasks in proximity to the worker’s location with the aim of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space, including the spatial distribution and the travel cost of the workers. MTA is based on the assumptions that all tasks are of the same type and all workers are equally qualified in performing the tasks. Meanwhile, different types of tasks may require workers with various skill sets or expertise. Subsequently, we extend MTA by taking the expertise of the workers into consideration. We refer to this problem as the maximum score assignment (MSA) problem and show its practicality and generality. Extensive experiments with various synthetic and two real-world datasets show the applicability of our proposed framework.", "title": "" }, { "docid": "434ea2b009a1479925ce20e8171aea46", "text": "Several high-voltage silicon carbide (SiC) devices have been demonstrated over the past few years, and the latest-generation devices are showing even faster switching, and greater current densities. However, there are no commercial gate drivers that are suitable for these high-voltage, high-speed devices. Consequently, there has been a great research effort into the development of gate drivers for high-voltage SiC transistors. 
This work presents the first detailed report on the design and testing of a high-power-density, high-speed, and high-noise-immunity gate drive for a high-current, 10 kV SiC MOSFET module.", "title": "" }, { "docid": "4fbc692a4291a92c6fa77dc78913e587", "text": "Achieving artificial visual reasoning — the ability to answer image-related questions which require a multi-step, high-level process — is an important step towards artificial general intelligence. This multi-modal task requires learning a questiondependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose, Conditional Batch Normalization approach achieves state-ofthe-art results on the CLEVR Visual Reasoning benchmark with a 2.4% error rate. We outperform the next best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.", "title": "" } ]
scidocsrr
6fe6b923a29ee1fca1ddc14233d66bbe
Targeted Storyfying: Creating Stories About Particular Events
[ { "docid": "96540d96bf2faacd0457caed66e0db4a", "text": "naturally associate with computers. Yet over the last few years there has been a surge of research efforts concerning the combination of both subjects. This article tries to shed light on these efforts. In carrying out this program, one is handicapped by the fact that, as words, both creativity and storytelling are severely lacking in the precision one expects of words to be used for intellectual endeavor. If a speaker were to mention either word in front of an audience, each person listening would probably come up with a different mental picture of what is intended. To avoid the risks that such vagueness might lead to, an initial effort is made here to restrict the endeavor to those aspects that have been modeled computationally in some model or system. The article then proceeds to review some of the research efforts that have addressed these problems from a computational point of view.", "title": "" } ]
[ { "docid": "83d330486c50fe2ae1d6960a4933f546", "text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.", "title": "" }, { "docid": "d31ba2b9ca7f5a33619fef33ade3b75a", "text": "We present ARPKI, a public-key infrastructure that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI is the first such infrastructure that systematically takes into account requirements identified by previous research. Moreover, ARPKI is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing all features required for deployment. ARPKI efficiently handles the certification process with low overhead and without incurring additional latency to TLS.\n ARPKI offers extremely strong security guarantees, where compromising n-1 trusted signing and verifying entities is insufficient to launch an impersonation attack. Moreover, it deters misbehavior as all its operations are publicly visible.", "title": "" }, { "docid": "44368062de68f6faed57d43b8e691e35", "text": "In this paper we explore one of the key aspects in building an emotion recognition system: generating suitable feature representations. We generate feature representations from both acoustic and lexical levels. At the acoustic level, we first extract low-level features such as intensity, F0, jitter, shimmer and spectral contours etc. We then generate different acoustic feature representations based on these low-level features, including statistics over these features, a new representation derived from a set of low-level acoustic codewords, and a new representation from Gaussian Supervectors. At the lexical level, we propose a new feature representation named emotion vector (eVector). We also use the traditional Bag-of-Words (BoW) feature. We apply these feature representations for emotion recognition and compare their performance on the USC-IEMOCAP database. We also combine these different feature representations via early fusion and late fusion. Our experimental results show that late fusion of both acoustic and lexical features achieves four-class emotion recognition accuracy of 69.2%.", "title": "" }, { "docid": "1b4ece2fe2c92fa1f3c5c8d61739cbb7", "text": "Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. 
[37] showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227 &#xd7; 227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models Plug and Play Generative Networks. PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable condition network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization [40], which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.", "title": "" }, { "docid": "accbf418bb065494953e784e7c93d0e9", "text": "Spreadsheets are among the most commonly used applications for data management and analysis. Perhaps they are even among the most widely used computer applications of all kinds. However, the spreadsheet paradigm of computation still lacks sufficient analysis.\n In this paper we demonstrate that a spreadsheet can play the role of a relational database engine, without any use of macros or built-in programming languages, merely by utilizing spreadsheet formulas. We achieve that by implementing all operators of relational algebra by means of spreadsheet functions.\n Given a definition of a database in SQL, it is therefore possible to construct a spreadsheet workbook with empty worksheets for data tables and worksheets filled with formulas for queries. From then on, when the user enters, alters or deletes data in the data worksheets, the formulas in query worksheets automatically compute the actual results of the queries. Thus, the spreadsheet serves as data storage and executes SQL queries, and therefore acts as a relational database engine.\n The paper is based on Microsoft Excel (TM), but our constructions work in other spreadsheet systems, too. We present a number of performance tests conducted in the beta version of Excel 2010. Their conclusion is that the performance is sufficient for a desktop database with a couple thousand rows.", "title": "" }, { "docid": "f9c8ae3d69d4e145e9d4ad2d2c828791", "text": "Phycobiliproteins are a group of colored proteins commonly present in cyanobacteria and red algae possessing a spectrum of applications. They are extensively commercialized for fluorescent applications in clinical and immunological analysis. They are also used as a colorant, and their therapeutic value has also been categorically demonstrated. However, a comprehensive knowledge and technological base for augmenting their commercial utilities is lacking. 
Hence, this work is focused towards this objective by means of analyzing global patents and commercial activities with application oriented research. Strategic mining of patents was performed from global patent databases resulting in the identification of 297 patents on phycobiliproteins. The majority of the patents are from USA, Japan and Europe. Patents are grouped into fluorescent applications, general applications and production aspects of phycobiliproteins and the features of each group are discussed. Commercial and applied research activities are compared in parallel. It revealed that US patents are mostly related to fluorescent applications while Japanese are on the production, purification and application for therapeutic and diagnostic purposes. Fluorescent applications are well represented in research, patents and commercial sectors. Biomedical properties documented in research and patents are not ventured commercially. Several novel applications are reported only in patents. The paper further pinpoints the plethora of techniques used for cell breakage and for extraction and purification of phycobiliproteins. The analysis identifies the lacuna and suggests means for improvements in the application and production of phycobiliproteins.", "title": "" }, { "docid": "b21135f6c627d7dfd95ad68c9fc9cc48", "text": "New mothers can experience social exclusion, particularly during the early weeks when infants are solely dependent on their mothers. We used ethnographic methods to investigate whether technology plays a role in supporting new mothers. Our research identified two core themes: (1) the need to improve confidence as a mother; and (2) the need to be more than 'just' a mother. We reflect on these findings both in terms of those interested in designing applications and services for motherhood and also the wider CHI community.", "title": "" }, { "docid": "721121e1393aea483d93a0b4d7fd2543", "text": "Bitmap indexes must be compressed to reduce input/output costs and minimize CPU usage. To accelerate logical operations (AND, OR, XOR) over bitmaps, we use techniques based on run-length encoding (RLE), such as Word-Aligned Hybrid (WAH) compression. These techniques are sensitive to the order of the rows: a simple lexicographical sort can divide the index size by 9 and make indexes several times faster. We investigate row-reordering heuristics. Simply permuting the columns of the table can increase the sorting efficiency by 40%. Secondary contributions include efficient algorithms to construct and aggregate bitmaps. The effect of word length is also reviewed by constructing 16-bit, 32-bit and 64-bit indexes. Using 64-bit CPUs, we find that 64-bit indexes are slightly faster than 32-bit indexes despite being nearly twice as large.", "title": "" }, { "docid": "f689c97559cba21d270ff9769aafe5d8", "text": "Many sensor network applications require that each node’s sensor stream be annotated with its physical location in some common coordinate system. Manual measurement and configuration methods for obtaining location don’t scale and are error-prone, and equipping sensors with GPS is often expensive and does not work in indoor and urban deployments. Sensor networks can therefore benefit from a self-configuring method where nodes cooperate with each other, estimate local distances to their neighbors, and converge to a consistent coordinate assignment. 
This paper describes a fully decentralized algorithm called AFL (Anchor-Free Localization) where nodes start from a random initial coordinate assignment and converge to a consistent solution using only local node interactions. The key idea in AFL is fold-freedom, where nodes first configure into a topology that resembles a scaled and unfolded version of the true configuration, and then run a force-based relaxation procedure. We show using extensive simulations under a variety of network sizes, node densities, and distance estimation errors that our algorithm is superior to previously proposed methods that incrementally compute the coordinates of nodes in the network, in terms of its ability to compute correct coordinates under a wider variety of conditions and its robustness to measurement errors.", "title": "" }, { "docid": "86df4a413696826b615ddd6004189884", "text": "In this paper, we consider two important problems defined on finite metric spaces, and provide efficient new algorithms and approximation schemes for these problems on inputs given as graph shortest path metrics or high-dimensional Euclidean metrics. The first of these problems is the greedy permutation (or farthest-first traversal) of a finite metric space: a permutation of the points of the space in which each point is as far as possible from all previous points. We describe randomized algorithms to find (1 + ε)-approximate greedy permutations of any graph with n vertices and m edges in expected time O ( ε−1(m + n) log n log(n/ε) ) , and to find (1 + ε)-approximate greedy permutations of points in high-dimensional Euclidean spaces in expected time O(ε−2n1+1/(1+ε) 2+o(1)). Additionally we describe a deterministic algorithm to find exact greedy permutations of any graph with n vertices and treewidth O(1) in worst-case time O(n3/2 log n). The second of the two problems we consider is distance selection: given k ∈ q( n 2 )y , we are interested in computing the kth smallest distance in the given metric space. We show that for planar graph metrics one can approximate this distance, up to a constant factor, in near linear time.", "title": "" }, { "docid": "4f57590f8bbf00d35b86aaa1ff476fc0", "text": "Pedestrian detection has been used in applications such as car safety, video surveillance, and intelligent vehicles. In this paper, we present a pedestrian detection scheme using HOG, LUV and optical flow features with AdaBoost Decision Stump classifier. Our experiments on Caltech-USA pedestrian dataset show that the proposed scheme achieves promising results of about 16.7% log-average miss rate.", "title": "" }, { "docid": "279870c84659e0eb6668e1ec494e77c9", "text": "There is a need to move from opinion-based education to evidence-based education. Best evidence medical education (BEME) is the implementation, by teachers in their practice, of methods and approaches to education based on the best evidence available. It involves a professional judgement by the teacher about his/her teaching taking into account a number of factors-the QUESTS dimensions. The Quality of the research evidence available-how reliable is the evidence? the Utility of the evidence-can the methods be transferred and adopted without modification, the Extent of the evidence, the Strength of the evidence, the Target or outcomes measured-how valid is the evidence? and the Setting or context-how relevant is the evidence? The evidence available can be graded on each of the six dimensions. 
In the ideal situation the evidence is high on all six dimensions, but this is rarely found. Usually the evidence may be good in some respects, but poor in others.The teacher has to balance the different dimensions and come to a decision on a course of action based on his or her professional judgement.The QUESTS dimensions highlight a number of tensions with regard to the evidence in medical education: quality vs. relevance; quality vs. validity; and utility vs. the setting or context. The different dimensions reflect the nature of research and innovation. Best Evidence Medical Education encourages a culture or ethos in which decision making takes place in this context.", "title": "" }, { "docid": "b8efbca1cb19f077c53ce8a7471ed31e", "text": "Microblogging sites such as Twitter can play a vital role in spreading information during “natural” or man-made disasters. But the volume and velocity of tweets posted during crises today tend to be extremely high, making it hard for disaster-affected communities and professional emergency responders to process the information in a timely manner. Furthermore, posts tend to vary highly in terms of their subjects and usefulness; from messages that are entirely off-topic or personal in nature, to messages containing critical information that augments situational awareness. Finding actionable information can accelerate disaster response and alleviate both property and human losses. In this paper, we describe automatic methods for extracting information from microblog posts. Specifically, we focus on extracting valuable “information nuggets”, brief, self-contained information items relevant to disaster response. Our methods leverage machine learning methods for classifying posts and information extraction. Our results, validated over one large disaster-related dataset, reveal that a careful design can yield an effective system, paving the way for more sophisticated data analysis and visualization systems.", "title": "" }, { "docid": "9eab2aa7c4fbfadb5642b47dd08c2014", "text": "A class of matrices (H-matrices) is introduced which have the following properties. (i) They are sparse in the sense that only few data are needed for their representation. (ii) The matrix-vector multiplication is of almost linear complexity. (iii) In general, sums and products of these matrices are no longer in the same set, but their truncations to the H-matrix format are again of almost linear complexity. (iv) The same statement holds for the inverse of an H-matrix. This paper is the first of a series and is devoted to the first introduction of the H-matrix concept. Two concret formats are described. The first one is the simplest possible. Nevertheless, it allows the exact inversion of tridiagonal matrices. The second one is able to approximate discrete integral operators. AMS Subject Classifications: 65F05, 65F30, 65F50.", "title": "" }, { "docid": "6be3f84e371874e2df32de9cb1d92482", "text": "We present an accurate and efficient stereo matching method using locally shared labels, a new labeling scheme that enables spatial propagation in MRF inference using graph cuts. They give each pixel and region a set of candidate disparity labels, which are randomly initialized, spatially propagated, and refined for continuous disparity estimation. We cast the selection and propagation of locally-defined disparity labels as fusion-based energy minimization. 
The joint use of graph cuts and locally shared labels has advantages over previous approaches based on fusion moves or belief propagation: it produces submodular moves deriving a subproblem optimality; enables powerful randomized search; helps to find good smooth, locally planar disparity maps, which are reasonable for natural scenes; and allows parallel computation of both unary and pairwise costs. Our method is evaluated using the Middlebury stereo benchmark and achieves first place in sub-pixel accuracy.", "title": "" }, { "docid": "a6acba54f34d1d101f4abb00f4fe4675", "text": "We study the potential flow of information in interaction networks, that is, networks in which the interactions between the nodes are being recorded. The central notion in our study is that of an information channel. An information channel is a sequence of interactions between nodes forming a path in the network which respects the time order. As such, an information channel represents a potential way information could have flowed in the interaction network. We propose algorithms to estimate information channels of limited time span from every node to other nodes in the network. We present one exact and one more efficient approximate algorithm. Both algorithms are one-pass algorithms. The approximation algorithm is based on an adaptation of the HyperLogLog sketch, which allows easily combining the sketches of individual nodes in order to get estimates of how many unique nodes can be reached from groups of nodes as well. We show how the results of our algorithm can be used to build efficient influence oracles for solving the influence maximization problem, which deals with finding top-k seed nodes such that the information spread from these nodes is maximized. Experiments show that the use of information channels is an interesting data-driven and model-independent way to find top-k influential nodes in interaction networks.", "title": "" }, { "docid": "b0b2e50ea9020f6dd6419fbb0520cdfd", "text": "Social interactions, such as an aggressive encounter between two conspecific males or a mating encounter between a male and a female, typically progress from an initial appetitive or motivational phase, to a final consummatory phase. This progression involves both changes in the intensity of the animals' internal state of arousal or motivation and sequential changes in their behavior. How are these internal states, and their escalating intensity, encoded in the brain? Does this escalation drive the progression from the appetitive/motivational to the consummatory phase of a social interaction and, if so, how are appropriate behaviors chosen during this progression? Recent work on social behaviors in flies and mice suggests possible ways in which changes in internal state intensity during a social encounter may be encoded and coupled to appropriate behavioral decisions at appropriate phases of the interaction. These studies may have relevance to understanding how emotion states influence cognitive behavioral decisions at higher levels of brain function.", "title": "" }, { "docid": "0b22d7f6326210f02da44b0fa686f25a", "text": "Current methods learn monolithic attribute predictors, with the assumption that a single model is sufficient to reflect human understanding of a visual attribute. However, in reality, humans vary in how they perceive the association between a named property and image content. 
For example, two people may have slightly different internal models for what makes a shoe look \"formal\", or they may disagree on which of two scenes looks \"more cluttered\". Rather than discount these differences as noise, we propose to learn user-specific attribute models. We adapt a generic model trained with annotations from multiple users, tailoring it to satisfy user-specific labels. Furthermore, we propose novel techniques to infer user-specific labels based on transitivity and contradictions in the user's search history. We demonstrate that adapted attributes improve accuracy over both existing monolithic models as well as models that learn from scratch with user-specific data alone. In addition, we show how adapted attributes are useful to personalize image search, whether with binary or relative attributes.", "title": "" }, { "docid": "62efd4c3e2edc5d8124d5c926484d79b", "text": "OBJECTIVE\nResearch studies show that social media may be valuable tools in the disease surveillance toolkit used for improving public health professionals' ability to detect disease outbreaks faster than traditional methods and to enhance outbreak response. A social media work group, consisting of surveillance practitioners, academic researchers, and other subject matter experts convened by the International Society for Disease Surveillance, conducted a systematic primary literature review using the PRISMA framework to identify research, published through February 2013, answering either of the following questions: Can social media be integrated into disease surveillance practice and outbreak management to support and improve public health?Can social media be used to effectively target populations, specifically vulnerable populations, to test an intervention and interact with a community to improve health outcomes?Examples of social media included are Facebook, MySpace, microblogs (e.g., Twitter), blogs, and discussion forums. For Question 1, 33 manuscripts were identified, starting in 2009 with topics on Influenza-like Illnesses (n = 15), Infectious Diseases (n = 6), Non-infectious Diseases (n = 4), Medication and Vaccines (n = 3), and Other (n = 5). For Question 2, 32 manuscripts were identified, the first in 2000 with topics on Health Risk Behaviors (n = 10), Infectious Diseases (n = 3), Non-infectious Diseases (n = 9), and Other (n = 10).\n\n\nCONCLUSIONS\nThe literature on the use of social media to support public health practice has identified many gaps and biases in current knowledge. Despite the potential for success identified in exploratory studies, there are limited studies on interventions and little use of social media in practice. However, information gleaned from the articles demonstrates the effectiveness of social media in supporting and improving public health and in identifying target populations for intervention. A primary recommendation resulting from the review is to identify opportunities that enable public health professionals to integrate social media analytics into disease surveillance and outbreak management practice.", "title": "" }, { "docid": "8a37001733b0ee384277526bd864fe04", "text": "Miscreants use DDoS botnets to attack a victim via a large number of malware-infected hosts, combining the bandwidth of the individual PCs. Such botnets have thus a high potential to render targeted services unavailable. However, the actual impact of attacks by DDoS botnets has never been evaluated. 
In this paper, we monitor C&C servers of 14 DirtJumper and Yoddos botnets and record the DDoS targets of these networks. We then aim to evaluate the availability of the DDoS victims, using a variety of measurements such as TCP response times and analyzing the HTTP content. We show that more than 65% of the victims are severely affected by the DDoS attacks, while also a few DDoS attacks likely failed.", "title": "" } ]
scidocsrr
84cafae58d4e9c4e246b658f99433710
Eigenvalues and eigenvectors of generalized DFT, generalized DHT, DCT-IV and DST-IV matrices
[ { "docid": "ba2d02d8c3e389b9b7659287eb406b16", "text": "We propose and consolidate a definition of the discrete fractional Fourier transform that generalizes the discrete Fourier transform (DFT) in the same sense that the continuous fractional Fourier transform generalizes the continuous ordinary Fourier transform. This definition is based on a particular set of eigenvectors of the DFT matrix, which constitutes the discrete counterpart of the set of Hermite–Gaussian functions. The definition is exactlyunitary, index additive, and reduces to the DFT for unit order. The fact that this definition satisfies all the desirable properties expected of the discrete fractional Fourier transform supports our confidence that it will be accepted as the definitive definition of this transform.", "title": "" } ]
[ { "docid": "50e9cf4ff8265ce1567a9cc82d1dc937", "text": "Thu, 06 Dec 2018 02:11:00 GMT bayesian reasoning and machine learning pdf Bayesian Reasoning and Machine Learning [David Barber] on Amazon.com. *FREE* shipping on qualifying offers. Machine learning methods extract value from vast data sets ... Thu, 06 Dec 2018 14:35:00 GMT Bayesian Reasoning and Machine Learning: David Barber ... A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of ... Sat, 08 Dec 2018 04:53:00 GMT Bayesian network Wikipedia Bayesian Reasoning and Machine Learning. The book is available in hardcopy from Cambridge University Press. The publishers have kindly agreed to allow the online ... Sun, 09 Dec 2018 20:51:00 GMT Bayesian Reasoning and Machine Learning, David Barber Machine learning (ML) is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. Mon, 10 Dec 2018 14:02:00 GMT Machine learning Wikipedia Your friends and colleagues are talking about something called \"Bayes' Theorem\" or \"Bayes' Rule\", or something called Bayesian reasoning. They sound really ... Mon, 10 Dec 2018 14:24:00 GMT Yudkowsky Bayes' Theorem NIPS 2016 Tutorial on ML Methods for Personalization with Application to Medicine. More here. UAI 2017 Tutorial on Machine Learning and Counterfactual Reasoning for ... Thu, 06 Dec 2018 15:33:00 GMT Suchi Saria – Machine Learning, Computational Health ... Gaussian Processes and Kernel Methods Gaussian processes are non-parametric distributions useful for doing Bayesian inference and learning on unknown functions. Mon, 10 Dec 2018 05:12:00 GMT Machine Learning Group Publications University of This practical introduction is geared towards scientists who wish to employ Bayesian networks for applied research using the BayesiaLab software platform. Sun, 09 Dec 2018 17:17:00 GMT Bayesian Networks & BayesiaLab: A Practical Introduction ... Automated Bitcoin Trading via Machine Learning Algorithms Isaac Madan Department of Computer Science Stanford University Stanford, CA 94305 imadan@stanford.edu Tue, 27 Nov 2018 20:01:00 GMT Automated Bitcoin Trading via Machine Learning Algorithms 2.3. Naà ̄ve Bayesian classifier. A Naà ̄ve Bayesian classifier generally seems very simple; however, it is a pioneer in most information and computational applications ... Sun, 09 Dec 2018 03:48:00 GMT Proposed efficient algorithm to filter spam using machine ... Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning) [Kevin P. Murphy, Francis Bach] on Amazon.com. *FREE* shipping on qualifying ... Sun, 01 Jul 2018 19:30:00 GMT Machine Learning: A Probabilistic Perspective (Adaptive ... So itâ€TMs pretty clear by now that statistics and machine learning arenâ€TMt very different fields. I was recently pointed to a very amusing comparison by the ... Fri, 07 Dec 2018 19:56:00 GMT Statistics vs. Machine Learning, fight! | AI and Social ... Need help with Statistics for Machine Learning? Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version ... Thu, 06 Dec 2018 23:39:00 GMT Statistics for Evaluating Machine Learning Models", "title": "" }, { "docid": "26b8ec80d9fe7317e306bed3cd5c9fa4", "text": "We describe a method for disambiguating Chinese commas that is central to Chinese sentence segmentation. 
Chinese sentence segmentation is viewed as the detection of loosely coordinated clauses separated by commas. Trained and tested on data derived from the Chinese Treebank, our model achieves a classification accuracy of close to 90% overall, which translates to an F1 score of 70% for detecting commas that signal sentence boundaries.", "title": "" }, { "docid": "2084a38c285ebfb2d5e40e8667414d0d", "text": "The Differential Evolution (DE) algorithm is a new heuristic approach mainly having three advantages: finding the true global minimum regardless of the initial parameter values, fast convergence, and using few control parameters. The DE algorithm is a population-based algorithm like genetic algorithms, using similar operators: crossover, mutation and selection. In this work, we have compared the performance of the DE algorithm to that of some other well-known versions of genetic algorithms: PGA, Grefensstette, Eshelman. In simulation studies, De Jong's test functions have been used. From the simulation results, it was observed that the convergence speed of DE is significantly better than that of genetic algorithms. Therefore, the DE algorithm seems to be a promising approach for engineering optimization problems.", "title": "" }, { "docid": "8327cb7a8d39ce8f8f982aa38cdd517e", "text": "Although many valuable visualizations have been developed to gain insights from large data sets, selecting an appropriate visualization for a specific data set and goal remains challenging for non-experts. In this paper, we propose a novel approach for knowledge-assisted, context-aware visualization recommendation. Both semantic web data and visualization components are annotated with formalized visualization knowledge from an ontology. We present a recommendation algorithm that leverages those annotations to provide visualization components that support the users' data and task. We successfully proved the practicability of our approach by integrating it into two research prototypes. Keywords-recommendation, visualization, ontology, mashup", "title": "" }, { "docid": "461786442ec8b8762019bb82d65491a5", "text": "Fog computing is a new paradigm providing network services such as computing and storage between end users and the cloud. The distributed and open structure is characteristic of fog computing, which makes it vulnerable to security threats. In this article, the interaction between vulnerable nodes and malicious nodes in fog computing is investigated as a non-cooperative differential game. The complex decision-making process is reviewed and analyzed. To solve the game, a fictitious play-based algorithm is proposed, with which the vulnerable node and the malicious nodes reach a feedback Nash equilibrium. We attain an optimal energy-consumption strategy with a QoS guarantee for the system, which is convenient to operate and suitable for fog nodes. The system simulation identifies the propagation of malicious nodes. We also determine the effects of various parameters on the optimal strategy. The simulation results provide a theoretical foundation for limiting malicious nodes in fog computing, which can help fog service providers make optimal dynamic strategies when different types of nodes change their strategies.", "title": "" }, { "docid": "5a525ccce94c64cd8b2d8cf9125a7802", "text": "and others at both organizations for their support and valuable input. Special thanks to Grey Advertising's Ben Arno who suggested the term brand resonance. 
Additional thanks to workshop participants at Duke University and Dartmouth College. MSI was established in 1961 as a not-for profit institute with the goal of bringing together business leaders and academics to create knowledge that will improve business performance. The primary mission was to provide intellectual leadership in marketing and its allied fields. Over the years, MSI's global network of scholars from leading graduate schools of management and thought leaders from sponsoring corporations has expanded to encompass multiple business functions and disciplines. Issues of key importance to business performance are identified by the Board of Trustees, which represents MSI corporations and the academic community. MSI supports studies by academics on these issues and disseminates the results through conferences and workshops, as well as through its publications series. This report, prepared with the support of MSI, is being sent to you for your information and review. It is not to be reproduced or published, in any form or by any means, electronic or mechanical, without written permission from the Institute and the author. Building a strong brand has been shown to provide numerous financial rewards to firms, and has become a top priority for many organizations. In this report, author Keller outlines the Customer-Based Brand Equity (CBBE) model to assist management in their brand-building efforts. According to the model, building a strong brand involves four steps: (1) establishing the proper brand identity, that is, establishing breadth and depth of brand awareness, (2) creating the appropriate brand meaning through strong, favorable, and unique brand associations, (3) eliciting positive, accessible brand responses, and (4) forging brand relationships with customers that are characterized by intense, active loyalty. Achieving these four steps, in turn, involves establishing six brand-building blocks—brand salience, brand performance, brand imagery, brand judgments, brand feelings, and brand resonance. The most valuable brand-building block, brand resonance, occurs when all the other brand-building blocks are established. With true brand resonance, customers express a high degree of loyalty to the brand such that they actively seek means to interact with the brand and share their experiences with others. Firms that are able to achieve brand resonance should reap a host of benefits, for example, greater price premiums and more efficient and effective marketing programs. The CBBE model provides a yardstick by …", "title": "" }, { "docid": "821b6ce6e6d51e9713bb44c4c9bf8cf0", "text": "Rapidly destructive arthritis (RDA) of the shoulder is a rare disease. Here, we report two cases, with different destruction patterns, which were most probably due to subchondral insufficiency fractures (SIFs). Case 1 involved a 77-year-old woman with right shoulder pain. Rapid destruction of both the humeral head and glenoid was seen within 1 month of the onset of shoulder pain. We diagnosed shoulder RDA and performed a hemiarthroplasty. Case 2 involved a 74-year-old woman with left shoulder pain. Humeral head collapse was seen within 5 months of pain onset, without glenoid destruction. Magnetic resonance imaging showed a bone marrow edema pattern with an associated subchondral low-intensity band, typical of SIF. Total shoulder arthroplasty was performed in this case. 
Shoulder RDA occurs as a result of SIF in elderly women; the progression of the joint destruction is more rapid in cases with SIFs of both the humeral head and the glenoid. Although shoulder RDA is rare, this disease should be included in the differential diagnosis of acute onset shoulder pain in elderly female patients with osteoporosis and persistent joint effusion.", "title": "" }, { "docid": "77652c8d471be4d28fb48aa5e2c3ee41", "text": "This paper is a survey and an analysis of different ways of using deep learning to generate musical content. We propose a methodology based on five dimensions: Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. - For what destination and for what use? To be performed by a human(s) (in the case of a musical score), or by a machine (in the case of an audio file). Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. - What format is to be used? Examples are: MIDI, piano roll or text. - How will the representation be encoded? Examples are: scalar, one-hot or many-hot. Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks. Challenges - What are the limitations and open challenges? Examples are: variability, interactivity and creativity. Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation. For each dimension, we conduct a comparative analysis of various models and techniques and propose some tentative multidimensional typology which is bottom-up, based on the analysis of many existing deep-learning based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenges and strategies. The last part of the paper includes some discussion and some prospects. This is a simplified version (weak DRM) of the book: Briot, J.-P., Hadjeres, G. and Pachet, F.-D. (2019) Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer.", "title": "" }, { "docid": "b8b4e582fbcc23a5a72cdaee1edade32", "text": "In recent years, research into the mining of user check-in behavior for point-of-interest (POI) recommendations has attracted a lot of attention. Existing studies on this topic mainly treat such recommendations in a traditional manner—that is, they treat POIs as items and check-ins as ratings. However, users usually visit a place for reasons other than to simply say that they have visited. In this article, we propose an approach referred to as Urban POI-Walk (UPOI-Walk), which takes into account a user's social-triggered intentions (SI), preference-triggered intentions (PreI), and popularity-triggered intentions (PopI), to estimate the probability of a user checking-in to a POI. The core idea of UPOI-Walk involves building a HITS-based random walk on the normalized check-in network, thus supporting the prediction of POI properties related to each user's preferences. To achieve this goal, we define several user--POI graphs to capture the key properties of the check-in behavior motivated by user intentions. 
In our UPOI-Walk approach, we propose a new kind of random walk model—Dynamic HITS-based Random Walk—which comprehensively considers the relevance between POIs and users from different aspects. On the basis of similitude, we make an online recommendation as to the POI the user intends to visit. To the best of our knowledge, this is the first work on urban POI recommendations that considers user check-in behavior motivated by SI, PreI, and PopI in location-based social network data. Through comprehensive experimental evaluations on two real datasets, the proposed UPOI-Walk is shown to deliver excellent performance.", "title": "" }, { "docid": "6ce429d7974c9593f4323ec306488b1f", "text": "The encoder-decoder framework for neural machine translation (NMT) has been shown effective in large data scenarios, but is much less effective for low-resource languages. We present a transfer learning method that significantly improves BLEU scores across a range of low-resource languages. Our key idea is to first train a high-resource language pair (the parent model), then transfer some of the learned parameters to the low-resource pair (the child model) to initialize and constrain training. Using our transfer learning method we improve baseline NMT models by an average of 5.6 BLEU on four low-resource language pairs. Ensembling and unknown word replacement add another 2 BLEU which brings the NMT performance on low-resource machine translation close to a strong syntax based machine translation (SBMT) system, exceeding its performance on one language pair. Additionally, using the transfer learning model for re-scoring, we can improve the SBMT system by an average of 1.3 BLEU, improving the state-of-the-art on low-resource machine translation.", "title": "" }, { "docid": "621d66aeff489c65eb9877270cb86b5f", "text": "Electronic customer relationship management (e-CRM) emerges from the Internet and Web technology to facilitate the implementation of CRM. It focuses on Internet- or Web-based interaction between companies and their customers. Above all, e-CRM enables service sectors to provide appropriate services and products to satisfy the customers so as to retain customer royalty and enhance customer profitability. This research is to explore the key research issues about e-CRM performance influence for service sectors in Taiwan. A research model is proposed based on the widely applied technology-organization-environment (TOE) framework. Survey data from the questionnaire are collected to empirically assess our research model.", "title": "" }, { "docid": "9222bd9fc9aeea6917b75bf0eb4aab63", "text": "In this paper we implemented different models to solve the review usefulness classification problem. Both feed-forward neural network and LSTM were able to beat the baseline model. Performances of the models are evaluated using 0-1 loss and F-1 scores. In general, LSTM outperformed feed-forward neural network, as we trained our own word vectors in that model, and LSTM itself was able to store more information as it processes sequence of words. Besides, we built a recommender system using the user-item-rating data to further investigate this dataset and intended to make connection with review classification. The performance of recommender system is measured by RMSE in rating predictions.", "title": "" }, { "docid": "00f2bb2dd3840379c2442c018407b1c8", "text": "BACKGROUND\nFacebook is a social networking site (SNS) for communication, entertainment and information exchange. 
Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals.\n\n\nAIM\nTo assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness.\n\n\nMETHODS\nA cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using Bergen Facebook Addiction Scale (BFAS) and University of California and Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. Pearson's bivariate correlation was done to see the relationship between severity of Facebook addiction and the experience of loneliness.\n\n\nRESULTS\nMore than one-fourth (26%) of the study participants had Facebook addiction and 33% had a possibility of Facebook addiction. There was a significant positive correlation between severity of Facebook addiction and extent of experience of loneliness ( r = .239, p = .017).\n\n\nCONCLUSION\nWith the rapid growth of popularity and user-base of Facebook, a significant portion of the individuals are susceptible to develop addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.", "title": "" }, { "docid": "6d89321d33ba5d923a7f31589888f430", "text": "OBJECTIVE\nThe pain experienced by burn patients during physical therapy range of motion exercises can be extreme and can discourage patients from complying with their physical therapy. We explored the novel use of immersive virtual reality (VR) to distract patients from pain during physical therapy.\n\n\nSETTING\nThis study was conducted at the burn care unit of a regional trauma center.\n\n\nPATIENTS\nTwelve patients aged 19 to 47 years (average of 21% total body surface area burned) performed range of motion exercises of their injured extremity under an occupational therapist's direction.\n\n\nINTERVENTION\nEach patient spent 3 minutes of physical therapy with no distraction and 3 minutes of physical therapy in VR (condition order randomized and counter-balanced).\n\n\nOUTCOME MEASURES\nFive visual analogue scale pain scores for each treatment condition served as the dependent variables.\n\n\nRESULTS\nAll patients reported less pain when distracted with VR, and the magnitude of pain reduction by VR was statistically significant (e.g., time spent thinking about pain during physical therapy dropped from 60 to 14 mm on a 100-mm scale). The results of this study may be examined in more detail at www.hitL.washington.edu/projects/burn/.\n\n\nCONCLUSIONS\nResults provided preliminary evidence that VR can function as a strong nonpharmacologic pain reduction technique for adult burn patients during physical therapy and potentially for other painful procedures or pain populations.", "title": "" }, { "docid": "683fe7f0b577acca2ef3af95015a62d6", "text": "Because of its high storage density with superior scalability, low integration cost and reasonably high access speed, spin-torque transfer random access memory (STT RAM) appears to have a promising potential to replace SRAM as last-level on-chip cache (e.g., L2 or L3 cache) for microprocessors. Due to unique operational characteristics of its storage device magnetic tunneling junction (MTJ), STT RAM is inherently subject to a write latency versus read latency tradeoff that is determined by the memory cell size. 
This paper first quantitatively studies how different memory cell sizing may impact the overall computing system performance, and shows that different computing workloads may have conflicting expectations on memory cell sizing. Leveraging MTJ device switching characteristics, we further propose an STT RAM architecture design method that can make STT RAM cache with relatively small memory cell size perform well over a wide spectrum of computing benchmarks. This has been well demonstrated using CACTI-based memory modeling and computing system performance simulations using SimpleScalar. Moreover, we show that this design method can also reduce STT RAM cache energy consumption by up to 30% over a variety of benchmarks.", "title": "" }, { "docid": "216f97a97d240456d36ec765fd45739e", "text": "This paper explores the growing trend of using mobile technology in university classrooms, exploring the use of tablets in particular, to identify learning benefits faced by students. Students, acting on their efficacy beliefs, make decisions regarding technology’s influence in improving their education. We construct a theoretical model in which internal and external factors affect a student’s self-efficacy which in turn affects the extent of adoption of a device for educational purposes. Through qualitative survey responses of university students who were given an Apple iPad to keep for the duration of a university course we find high levels of self-efficacy leading to positive views of the technology’s learning enhancement capabilities. Student observations on the practicality of the technology, off-topic use and its effects, communication, content, and perceived market advantage of using a tablet are also explored.", "title": "" }, { "docid": "b7617b5dd2a6f392f282f6a34f5b6751", "text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.", "title": "" }, { "docid": "920306f59d16291d0cdf80e984a1b5de", "text": "In contrast to common smooth cables, helix cables possess a spiral circular salient with a diameter ranging from 6 to 10mm on their surface. Helix cables can effectively inhibit the windand raininduced vibrations of cables and are thus commonly used on newly built bridges. In this study, a helix cable-detecting robot is proposed to inspect the inner broken wire condition of helix cables. This robot consists of a driving trolley, as well as upper and lower supporting links. The driving trolley and supporting links were connected by fixed joints and are mounted opposite to each other along the cable. To ensure that the body of the robot is not in contact with the cable surface, a magnetic absorption unit was designed in the driving trolley. A climbing unit was placed on the body of the robot which can enable the trolley to rotate arbitrarily to adapt its water conductivity lines on cables with different screw pitches. 
A centrifugal speed regulation method was also proposed to ensure the safe return of the robot to the ground. Theoretical analysis and experimental results suggest that the mechanism could carry a payload of 1.5 kg and climb steadily along the helix cable at an inclined angle ranging from 30◦ to 85◦. The load-carrying ability satisfied the requirement to carry sensors or instruments such as cameras to inspect the cable.", "title": "" }, { "docid": "f36b96ef76841a018e76a3bc84072b5a", "text": "Following the recent success of word embeddings, it has been argued that there is no such thing as an ideal representation for words, as different models tend to capture divergent and often mutually incompatible aspects like semantics/syntax and similarity/relatedness. In this paper, we show that each embedding model captures more information than directly apparent. A linear transformation that adjusts the similarity order of the model without any external resource can tailor it to achieve better results in those aspects, providing a new perspective on how embeddings encode divergent linguistic information. In addition, we explore the relation between intrinsic and extrinsic evaluation, as the effect of our transformations in downstream tasks is higher for unsupervised systems than for supervised ones.", "title": "" }, { "docid": "7b8fc21d27c9eb7c8e1df46eec7d6b6d", "text": "This paper examines two methods - magnet shifting and optimizing the magnet pole arc - for reducing cogging torque in permanent magnet machines. The methods were applied to existing machine designs and their performance was calculated using finite-element analysis (FEA). Prototypes of the machine designs were constructed and experimental results obtained. It is shown that the FEA predicted the cogging torque to be nearly eliminated using the two methods. However, there was some residual cogging in the prototypes due to manufacturing difficulties. In both methods, the back electromotive force was improved by reducing harmonics while preserving the magnitude.", "title": "" } ]
scidocsrr
d276a1c808c04e21123f7d089f9a9d5d
Weaving Multi-scale Context for Single Shot Detector
[ { "docid": "8979f2a0e6db231b1363f764366e1d56", "text": "In the current object detection field, one of the fastest algorithms is the Single Shot Multi-Box Detector (SSD), which uses a single convolutional neural network to detect the object in an image. Although SSD is fast, there is a big gap compared with the state-of-the-art on mAP. In this paper, we propose a method to improve SSD algorithm to increase its classification accuracy without affecting its speed. We adopt the Inception block to replace the extra layers in SSD, and call this method Inception SSD (I-SSD). The proposed network can catch more information without increasing the complexity. In addition, we use the batch-normalization (BN) and the residual structure in our I-SSD network architecture. Besides, we propose an improved non-maximum suppression method to overcome its deficiency on the expression ability of the model. The proposed I-SSD algorithm achieves 78.6% mAP on the Pascal VOC2007 test, which outperforms SSD algorithm while maintaining its time performance. We also construct an Outdoor Object Detection (OOD) dataset to testify the effectiveness of the proposed I-SSD on the platform of unmanned vehicles.", "title": "" } ]
[ { "docid": "8c3e3a120d63cca6808fef94d2922843", "text": "Python offers basic facilities for interactive work and a comprehensive library on top of which more sophisticated systems can be built. The IPython project provides on enhanced interactive environment that includes, among other features, support for data visualization and facilities for distributed and parallel computation", "title": "" }, { "docid": "fc1b3f7da0812465b7ff57a65e36bf3c", "text": "We describe N–body networks, a neural network architecture for learning the behavior and properties of complex many body physical systems. Our specific application is to learn atomic potential energy surfaces for use in molecular dynamics simulations. Our architecture is novel in that (a) it is based on a hierarchical decomposition of the many body system into subsytems (b) the activations of the network correspond to the internal state of each subsystem (c) the “neurons” in the network are constructed explicitly so as to guarantee that each of the activations is covariant to rotations (d) the neurons operate entirely in Fourier space, and the nonlinearities are realized by tensor products followed by Clebsch–Gordan decompositions. As part of the description of our network, we give a characterization of what way the weights of the network may interact with the activations so as to ensure that the covariance property is maintained.", "title": "" }, { "docid": "c65f050e911abb4b58b4e4f9b9aec63b", "text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.", "title": "" }, { "docid": "ec377000353bce311c0887cd4edab554", "text": "This paper explains various security issues in the existing home automation systems and proposes the use of logic-based security algorithms to improve home security. This paper classifies natural access points to a home as primary and secondary access points depending on their use. Logic-based sensing is implemented by identifying normal user behavior at these access points and requesting user verification when necessary. User position is also considered when various access points changed states. 
Moreover, the algorithm also verifies the legitimacy of a fire alarm by measuring the change in temperature, humidity, and carbon monoxide levels, thus defending against manipulative attackers. The experiment conducted in this paper used a combination of sensors, microcontrollers, Raspberry Pi and ZigBee communication to identify user behavior at various access points and implement the logical sensing algorithm. In the experiment, the proposed logical sensing algorithm was successfully implemented for a month in a studio apartment. During the course of the experiment, the algorithm was able to detect all the state changes of the primary and secondary access points and also successfully verified user identity 55 times generating 14 warnings and 5 alarms.", "title": "" }, { "docid": "466c537fca72aaa1e9cda2dc22c0f504", "text": "This paper presents a single-phase grid-connected photovoltaic (PV) module-integrated converter (MIC) based on cascaded quasi-Z-source inverters (qZSI). In this system, each qZSI module serves as an MIC and is connected to one PV panel. Due to the cascaded structure and qZSI topology, the proposed MIC features low-voltage gain requirement, single-stage energy conversion, enhanced reliability, and good output power quality. Furthermore, the enhancement mode gallium nitride field-effect transistors (eGaN FETs) are employed in the qZSI module for efficiency improvement at higher switching frequency. It is found that the qZSI is very suitable for the application of eGaN FETs because of the shoot-through capability. Optimized module design is developed based on the derived qZSI ac equivalent model and power loss analytical model to achieve high efficiency and high power density. A design example of qZSI module is presented for a 250-W PV panel with 25-50-V output voltage. The simulation and experimental results prove the validity of the analytical models. The final module prototype design achieves up to 98.06% efficiency with 100-kHz switching frequency.", "title": "" }, { "docid": "5207b424fcaab6ed130ccf85008f1d46", "text": "We describe a component of a document analysis system for constructing ontologies for domain-specific web tables imported into Excel. This component automates extraction of the Wang Notation for the column header of a table. Using column-header specific rules for XY cutting we convert the geometric structure of the column header to a linear string denoting cell attributes and directions of cuts. The string representation is parsed by a context-free grammar and the parse tree is further processed to produce an abstract data-type representation (the Wang notation tree) of each column category. Experiments were carried out to evaluate this scheme on the original and edited column headers of Excel tables drawn from a collection of 200 used in our earlier work. The transformed headers were obtained by editing the original column headers to conform to the format targeted by our grammar. Forty-four original headers and their reformatted versions were submitted as input to our software system. Our grammar was able to parse and the extract Wang notation tree for all the edited headers, but for only four of the original headers. We suggest extensions to our table grammar that would enable processing a larger fraction of headers without manual editing.", "title": "" }, { "docid": "541545bc30c887560541ba456cdfc595", "text": "Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. 
Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold, to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an \"advanced risk analysis system\" that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content. In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications for impacting the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications, as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.", "title": "" }, { "docid": "8722d7864499c76f76820b5f7f0c4fc6", "text": "This paper proposes a new scientific integration of the classical and quantum fundamentals of neuropsychotherapy. The history, theory, research, and practice of neuropsychotherapy are reviewed and updated in light of the current STEM perspectives on science, technology, engineering, and mathematics. New technology is introduced to motivate more systematic research comparing the bioelectronic amplitudes of varying states of human stress, relaxation, biofeedback, creativity, and meditation. Case studies of the neuropsychotherapy of attention span, consciousness, cognition, chirality, and dissociation along with the psychodynamics of therapeutic hypnosis and chronic post-traumatic stress disorder (PTSD) are explored. Implications of neuropsychotheraputic research for investigating relationships between activity-dependent gene expression, brain plasticity, and the quantum qualia of consciousness and cognition are discussed. Symmetry in neuropsychotherapy is related to Noether’s theorem of nature’s conservation laws for a unified theory of physics, biology, and psychology on the quantum level. Neuropsychotheraputic theory, research, and practice is conceptualized as a common yardstick for integrating the fundamentals of physics, biology, and the psychology of consciousness, cognition, and behavior at the quantum level.", "title": "" }, { "docid": "dca8895967ae9b86979f428d77e84ae5", "text": "This study examined how the frequency of positive and negative emotions is related to life satisfaction across nations. 
Participants were 8,557 people from 46 countries who reported on their life satisfaction and frequency of positive and negative emotions. Multilevel analyses showed that across nations, the experience of positive emotions was more strongly related to life satisfaction than the absence of negative emotions. Yet, the cultural dimensions of individualism and survival/self-expression moderated these relationships. Negative emotional experiences were more negatively related to life satisfaction in individualistic than in collectivistic nations, and positive emotional experiences had a larger positive relationship with life satisfaction in nations that stress self-expression than in nations that value survival. These findings show how emotional aspects of the good life vary with national culture and how this depends on the values that characterize one's society. Although to some degree, positive and negative emotions might be universally viewed as desirable and undesirable, respectively, there appear to be clear cultural differences in how relevant such emotional experiences are to quality of life.", "title": "" }, { "docid": "23555b843d5702012f10fd467d1578df", "text": "Biofuel production from renewable sources is widely considered to be one of the most sustainable alternatives to petroleum sourced fuels and a viable means for environmental and economic sustainability. Microalgae are currently being promoted as an ideal third generation biofuel feedstock because of their rapid growth rate, CO2 fixation ability and high production capacity of lipids; they also do not compete with food or feed crops, and can be produced on non-arable land. Microalgae have broad bioenergy potential as they can be used to produce liquid transportation and heating fuels, such as biodiesel and bioethanol. In this review we present an overview about microalgae use for biodiesel and bioethanol production, including their cultivation, harvesting, and processing. The most used microalgal species for these purposes as well as the main microalgal cultivation systems (photobioreactors and open ponds) will also be discussed.", "title": "" }, { "docid": "36828667ce43ab5d489f74e112045639", "text": "Zero-shot learning has received increasing interest as a means to alleviate the often prohibitive expense of annotating training data for large scale recognition problems. These methods have achieved great success via learning intermediate semantic representations in the form of attributes and more recently, semantic word vectors. However, they have thus far been constrained to the single-label case, in contrast to the growing popularity and importance of more realistic multi-label data. In this paper, for the first time, we investigate and formalise a general framework for multi-label zero-shot learning, addressing the unique challenge therein: how to exploit multi-label correlation at test time with no training data for those classes? In particular, we propose (1) a multi-output deep regression model to project an image into a semantic word space, which explicitly exploits the correlations in the intermediate semantic layer of word vectors; (2) a novel zero-shot learning algorithm for multi-label data that exploits the unique compositionality property of semantic word vector representations; and (3) a transductive learning strategy to enable the regression model learned from seen classes to generalise well to unseen classes. 
Our zero-shot learning experiments on a number of standard multi-label datasets demonstrate that our method outperforms a variety of baselines.", "title": "" }, { "docid": "7aa6b9cb3a7a78ec26aff130a1c9015a", "text": "As critical infrastructures in the Internet, data centers have evolved to include hundreds of thousands of servers in a single facility to support dataand/or computing-intensive applications. For such large-scale systems, it becomes a great challenge to design an interconnection network that provides high capacity, low complexity, low latency and low power consumption. The traditional approach is to build a hierarchical packet network using switches and routers. This approach suffers from limited scalability in the aspects of power consumption, wiring and control complexity, and delay caused by multi-hop store-andforwarding. In this paper we tackle the challenge by designing a novel switch architecture that supports direct interconnection of huge number of server racks and provides switching capacity at the level of Petabit/s. Our design combines the best features of electronics and optics. Exploiting recent advances in optics, we propose to build a bufferless optical switch fabric that includes interconnected arrayed waveguide grating routers (AWGRs) and tunable wavelength converters (TWCs). The optical fabric is integrated with electronic buffering and control to perform highspeed switching with nanosecond-level reconfiguration overhead. In particular, our architecture reduces the wiring complexity from O(N) to O(sqrt(N)). We design a practical and scalable scheduling algorithm to achieve high throughput under various traffic load. We also discuss implementation issues to justify the feasibility of this design. Simulation shows that our design achieves good throughput and delay performance.", "title": "" }, { "docid": "6eed8af8f6f65583e89cdd44e8d8844b", "text": "Natural language processing (NLP), or the pragmatic research perspective of computational linguistics, has become increasingly powerful due to data availability and various techniques developed in the past decade. This increasing capability makes it possible to capture sentiments more accurately and semantics in a more nuanced way. Naturally, many applications are starting to seek improvements by adopting cutting-edge NLP techniques. Financial forecasting is no exception. As a result, articles that leverage NLP techniques to predict financial markets are fast accumulating, gradually establishing the research field of natural language based financial forecasting (NLFF), or from the application perspective, stock market prediction. This review article clarifies the scope of NLFF research by ordering and structuring techniques and applications from related work. The survey also aims to increase the understanding of progress and hotspots in NLFF, and bring about discussions across many different disciplines.", "title": "" }, { "docid": "028eb05afad2183bdf695b4268c438ed", "text": "OBJECTIVE\nChoosing an appropriate method for regression analyses of cost data is problematic because it must focus on population means while taking into account the typically skewed distribution of the data. 
In this paper we illustrate the use of generalised linear models for regression analysis of cost data.\n\n\nMETHODS\nWe consider generalised linear models with either an identity link function (providing additive covariate effects) or log link function (providing multiplicative effects), and with gaussian (normal), overdispersed poisson, gamma, or inverse gaussian distributions. These are applied to estimate the treatment effects in two randomised trials adjusted for baseline covariates. Criteria for choosing an appropriate model are presented.\n\n\nRESULTS\nIn both examples considered, the gaussian model fits poorly and other distributions are to be preferred. When there are variables of prognostic importance in the model, using different distributions can materially affect the estimates obtained; it may also be possible to discriminate between additive and multiplicative covariate effects.\n\n\nCONCLUSIONS\nGeneralised linear models are attractive for the regression of cost data because they provide parametric methods of analysis where a variety of non-normal distributions can be specified and the way covariates act can be altered. Unlike the use of data transformation in ordinary least-squares regression, generalised linear models make inferences about the mean cost directly.", "title": "" }, { "docid": "5626f7c767ae20c3b58d2e8fb2b93ba7", "text": "The presentation starts with a philosophical discussion about computer vision in general. The aim is to put the scope of the book into its wider context, and to emphasize why the notion of scale is crucial when dealing with measured signals, such as image data. An overview of different approaches to multi-scale representation is presented, and a number of special properties of scale-space are pointed out. Then, it is shown how a mathematical theory can be formulated for describing image structures at different scales. By starting from a set of axioms imposed on the first stages of processing, it is possible to derive a set of canonical operators, which turn out to be derivatives of Gaussian kernels at different scales. The problem of applying this theory computationally is extensively treated. A scale-space theory is formulated for discrete signals, and it demonstrated how this representation can be used as a basis for expressing a large number of visual operations. Examples are smoothed derivatives in general, as well as different types of detectors for image features, such as edges, blobs, and junctions. In fact, the resulting scheme for feature detection induced by the presented theory is very simple, both conceptually and in terms of practical implementations. Typically, an object contains structures at many different scales, but locally it is not unusual that some of these \"stand out\" and seem to be more significant than others. A problem that we give special attention to concerns how to find such locally stable scales, or rather how to generate hypotheses about interesting structures for further processing. It is shown how the scale-space theory, based on a representation called the scale-space primal sketch, allows us to extract regions of interest from an image without prior information about what the image can be expected to contain. Such regions, combined with knowledge about the scales at which they occur constitute qualitative information, which can be used for guiding and simplifying other low-level processes. 
Experiments on different types of real and synthetic images demonstrate how the suggested approach can be used for different visual tasks, such as image segmentation, edge detection, junction detection, and focus-of-attention. This work is complemented by a mathematical treatment showing how the behaviour of different types of image structures in scale-space can be analysed theoretically.", "title": "" }, { "docid": "04f705462bdd34a8d82340fb59264a51", "text": "This paper describes EmoTweet-28, a carefully curated corpus of 15,553 tweets annotated with 28 emotion categories for the purpose of training and evaluating machine learning models for emotion classification. EmoTweet-28 is, to date, the largest tweet corpus annotated with fine-grained emotion categories. The corpus contains annotations for four facets of emotion: valence, arousal, emotion category and emotion cues. We first used small-scale content analysis to inductively identify a set of emotion categories that characterize the emotions expressed in microblog text. We then expanded the size of the corpus using crowdsourcing. The corpus encompasses a variety of examples including explicit and implicit expressions of emotions as well as tweets containing multiple emotions. EmoTweet-28 represents an important resource to advance the development and evaluation of more emotion-sensitive systems.", "title": "" }, { "docid": "261796369653e128821136f327056894", "text": "Automatic note-level transcription is considered one of the most challenging tasks in music information retrieval. The specific case of flamenco singing transcription poses a particular challenge due to its complex melodic progressions, intonation inaccuracies, the use of a high degree of ornamentation, and the presence of guitar accompaniment. In this study, we explore the limitations of existing state of the art transcription systems for the case of flamenco singing and propose a specific solution for this genre: We first extract the predominant melody and apply a novel contour filtering process to eliminate segments of the pitch contour which originate from the guitar accompaniment. We formulate a set of onset detection functions based on volume and pitch characteristics to segment the resulting vocal pitch contour into discrete note events. A quantised pitch label is assigned to each note event by combining global pitch class probabilities with local pitch contour statistics. The proposed system outperforms state of the art singing transcription systems with respect to voicing accuracy, onset detection, and overall performance when evaluated on flamenco singing datasets.", "title": "" }, { "docid": "f90a4bbfbe4c6ea98457639a65dd84af", "text": "People in different cultures have strikingly different construals of the self, of others, and of the interdependence of the 2. These construals can influence, and in many cases determine, the very nature of individual experience, including cognition, emotion, and motivation. Many Asian cultures have distinct conceptions of individuality that insist on the fundamental relatedness of individuals to each other. The emphasis is on attending to others, fitting in, and harmonious interdependence with them. American culture neither assumes nor values such an overt connectedness among individuals. In contrast, individuals seek to maintain their independence from others by attending to the self and by discovering and expressing their unique inner attributes. 
As proposed herein, these construals are even more powerful than previously imagined. Theories of the self from both psychology and anthropology are integrated to define in detail the difference between a construal of the self as independent and a construal of the self as interdependent. Each of these divergent construals should have a set of specific consequences for cognition, emotion, and motivation; these consequences are proposed and relevant empirical literature is reviewed. Focusing on differences in self-construals enables apparently inconsistent empirical findings to be reconciled, and raises questions about what have been thought to be culture-free aspects of cognition, emotion, and motivation.", "title": "" }, { "docid": "1c3ab8ec5a2c12ebdb333ebb0d85feaa", "text": "Recently, intuitionist theories have been effective in capturing the academic discourse about morality. Intuitionist theories, like rationalist theories, offer important but only partial understanding of moral functioning. Both can be fallacious and succumb to truthiness: the attachment to one's opinions because they \"feel right,\" potentially leading to harmful action or inaction. Both intuition and reasoning are involved in deliberation and expertise. Both are malleable from environmental and educational influence, making questions of normativity-which intuitions and reasoning skills to foster-of utmost importance. Good intuition and reasoning inform mature moral functioning, which needs to include capacities that promote sustainable human well-being. Individual capacities for habituated empathic concern and moral metacognition-moral locus of control, moral self-regulation, and moral self-reflection-comprise mature moral functioning, which also requires collective capacities for moral dialogue and moral institutions. These capacities underlie moral innovation and are necessary for solving the complex challenges humanity faces.", "title": "" }, { "docid": "80b041b8712436474a200c5b5ed3aeb2", "text": "Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, it requires to concurrently solve the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article presents two approaches to the SLAM problem using vision: one with stereovision, and one with monocular images. Both approaches rely on a robust interest point matching algorithm that works in very diverse environments. The stereovision based approach is a classic SLAM implementation, whereas the monocular approach introduces a new way to initialize landmarks. Both approaches are analyzed and compared with extensive experimental results, with a rover and a blimp.", "title": "" } ]
scidocsrr
948ced35e7164c1092d9069e0b3efa85
Life cycle assessment of building materials: Comparative analysis of energy and environmental impacts and evaluation of the eco-efficiency improvement potential
[ { "docid": "85d4ac147a4517092b9f81f89af8b875", "text": "This article is an update of an article five of us published in 1992. The areas of Multiple Criteria Decision Making (MCDM) and Multiattribute Utility Theory (MAUT) continue to be active areas of management science research and application. This paper extends the history of these areas and discusses topics we believe to be important for the future of these fields. as well as two anonymous reviewers for valuable comments.", "title": "" } ]
[ { "docid": "4cb49a91b5a30909c99138a8e36badcd", "text": "The main goal of Business Process Management (BPM) is conceptualising, operationalizing and controlling workflows in organisations based on process models. In this paper we discuss several limitations of the workflow paradigm and suggest that process models can also play an important role in analysing how organisations think about themselves through storytelling. We contrast the workflow paradigm with storytelling through a comparative analysis. We also report a case study where storytelling has been used to elicit and document the practices of an IT maintenance team. This research contributes towards the development of better process modelling languages and tools.", "title": "" }, { "docid": "ae3e9bf485d4945af625fca31eaedb76", "text": "This document describes concisely the ubiquitous class of exponential family distributions met in statistics. The first part recalls definitions and summarizes main properties and duality with Bregman divergences (all proofs are skipped). The second part lists decompositions and related formula of common exponential family distributions. We recall the Fisher-Rao-Riemannian geometries and the dual affine connection information geometries of statistical manifolds. It is intended to maintain and update this document and catalog by adding new distribution items. See the jMEF library, a Java package for processing mixture of exponential families. Available for download at http://www.lix.polytechnique.fr/~nielsen/MEF/ École Polytechnique (France) and Sony Computer Science Laboratories Inc. (Japan). École Polytechnique (France).", "title": "" }, { "docid": "c6e0843498747096ebdafd51d4b5cca6", "text": "The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.", "title": "" }, { "docid": "bde4436370b1d5e1423d1b9c710a47ad", "text": "This paper provides a review of the literature addressing sensorless operation methods of PM brushless machines. The methods explained are state-of-the-art of open and closed loop control strategies. The closed loop review includes those methods based on voltage and current measurements, those methods based on back emf measurements, and those methods based on novel techniques not included in the previous categories. The paper concludes with a comparison table including all main features for all control strategies", "title": "" }, { "docid": "525a819d97e84862d4190b1e0aa4acc0", "text": "HELIOS2014 is a 2D soccer simulation team which has been participating in the RoboCup competition since 2000. We recently focus on an online multiagent planning using tree search methodology. 
This paper describes the overview of our search framework and an evaluation method to select the best action sequence.", "title": "" }, { "docid": "71e6994bf56ed193a3a04728c7022a45", "text": "To evaluate timing and duration differences in airway protection and esophageal opening after oral intubation and mechanical ventilation for acute respiratory distress syndrome (ARDS) survivors versus age-matched healthy volunteers. Orally intubated adult (≥ 18 years old) patients receiving mechanical ventilation for ARDS were evaluated for swallowing impairments via a videofluoroscopic swallow study (VFSS) during usual care. Exclusion criteria were tracheostomy, neurological impairment, and head and neck cancer. Previously recruited healthy volunteers (n = 56) served as age-matched controls. All subjects were evaluated using 5-ml thin liquid barium boluses. VFSS recordings were reviewed frame-by-frame for the onsets of 9 pharyngeal and laryngeal events during swallowing. Eleven patients met inclusion criteria, with a median (interquartile range [IQR]) intubation duration of 14 (9, 16) days, and VFSSs completed a median of 5 (4, 13) days post-extubation. After arrival of the bolus in the pharynx, ARDS patients achieved maximum laryngeal closure a median (IQR) of 184 (158, 351) ms later than age-matched, healthy volunteers (p < 0.001) and it took longer to achieve laryngeal closure with a median (IQR) difference of 151 (103, 217) ms (p < 0.001), although there was no significant difference in duration of laryngeal closure. Pharyngoesophageal segment opening was a median (IQR) of − 116 (− 183, 1) ms (p = 0.004) shorter than in age-matched, healthy controls. Evaluation of swallowing physiology after oral endotracheal intubation in ARDS patients demonstrates slowed pharyngeal and laryngeal swallowing timing, suggesting swallow-related muscle weakness. These findings may highlight specific areas for further evaluation and potential therapeutic intervention to reduce post-extubation aspiration.", "title": "" }, { "docid": "9fba167ef82aa8c153986ea498683ff6", "text": "Purpose – The purpose of this conceptual paper is to identify important elements of brand building based on a literature review and case studies of successful brands in India. Design/methodology/approach – This paper is based on a review of the literature and takes a case study approach. The paper suggests the framework for building brand identity in sequential order, namely, positioning the brand, communicating the brand message, delivering the brand performance, and leveraging the brand equity. Findings – Brand-building effort has to be aligned with organizational processes that help deliver the promises to customers through all company departments, intermediaries, suppliers, etc., as all these play an important role in the experience customers have with the brand. Originality/value – The paper uses case studies of leading Indian brands to illustrate the importance of action elements in building brands in competitive markets.", "title": "" }, { "docid": "80ee585d49685a24a2011a1ddc27bb55", "text": "A developmental model of antisocial behavior is outlined. Recent findings are reviewed that concern the etiology and course of antisocial behavior from early childhood through adolescence. Evidence is presented in support of the hypothesis that the route to chronic delinquency is marked by a reliable developmental sequence of experiences. As a first step, ineffective parenting practices are viewed as determinants for childhood conduct disorders. 
The general model also takes into account the contextual variables that influence the family interaction process. As a second step, the conduct-disordered behaviors lead to academic failure and peer rejection. These dual failures lead, in turn, to increased risk for depressed mood and involvement in a deviant peer group. This third step usually occurs during later childhood and early adolescence. It is assumed that children following this developmental sequence are at high risk for engaging in chronic delinquent behavior. Finally, implications for prevention and intervention are discussed.", "title": "" }, { "docid": "37af8daa32affcdedb0b4820651a0b62", "text": "Bag of words (BoW) model, which was originally used for document processing field, has been introduced to computer vision field recently and used in object recognition successfully. However, in face recognition, the order less collection of local patches in BoW model cannot provide strong distinctive information since the objects (face images) belong to the same category. A new framework for extracting facial features based on BoW model is proposed in this paper, which can maintain holistic spatial information. Experimental results show that the improved method can obtain better face recognition performance on face images of AR database with extreme expressions, variant illuminations, and partial occlusions.", "title": "" }, { "docid": "833ec45dfe660377eb7367e179070322", "text": "It was predicted that high self-esteem Ss (HSEs) would rationalize an esteem-threatening decision less than low self-esteem Ss (LSEs), because HSEs presumably had more favorable self-concepts with which to affirm, and thus repair, their overall sense of self-integrity. This prediction was supported in 2 experiments within the \"free-choice\" dissonance paradigm--one that manipulated self-esteem through personality feedback and the other that varied it through selection of HSEs and LSEs, but only when Ss were made to focus on their self-concepts. A 3rd experiment countered an alternative explanation of the results in terms of mood effects that may have accompanied the experimental manipulations. The results were discussed in terms of the following: (a) their support for a resources theory of individual differences in resilience to self-image threats--an extension of self-affirmation theory, (b) their implications for self-esteem functioning, and (c) their implications for the continuing debate over self-enhancement versus self-consistency motivation.", "title": "" }, { "docid": "10e6b505ba74b1c8aea1417a4eb36c30", "text": "This meta-analysis summarizes teaching effectiveness studies of the past decade and investigates the role of theory and research design in disentangling results. Compared to past analyses based on the process–product model, a framework based on cognitive models of teaching and learning proved useful in analyzing studies and accounting for variations in effect sizes. Although the effects of teaching on student learning were diverse and complex, they were fairly systematic. The authors found the largest effects for domainspecific components of teaching—teaching most proximal to executive processes of learning. By taking into account research design, the authors further disentangled meta-analytic findings. For example, domain-specific teaching components were mainly studied with quasi-experimental or experimental designs. 
Finally, correlational survey studies dominated teaching effectiveness studies in the past decade but proved to be more distal from the teaching–learning process.", "title": "" }, { "docid": "9a38b18bd69d17604b6e05b9da450c2d", "text": "New invention of advanced technology, enhanced capacity of storage media, maturity of information technology and popularity of social media, business intelligence and Scientific invention, produces huge amount of data which made ample set of information that is responsible for birth of new concept well known as big data. Big data analytics is the process of examining large amounts of data. The analysis is done on huge amount of data which is structure, semi structure and unstructured. In big data, data is generated at exponentially for reason of increase use of social media, email, document and sensor data. The growth of data has affected all fields, whether it is business sector or the world of science. In this paper, the process of system is reviewed for managing &quot;Big Data&quot; and today&apos;s activities on big data tools and techniques.", "title": "" }, { "docid": "9bf99d48bc201147a9a9ad5af547a002", "text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.", "title": "" }, { "docid": "a36e43f03735d7610677465bd78e9b6f", "text": "Existing Poisson mesh editing techniques mainly focus on designing schemes to propagate deformation from a given boundary condition to a region of interest. Although solving the Poisson system in the least-squares sense distributes the distortion errors over the entire region of interest, large deformation in the boundary condition might still lead to severely distorted results. We propose to optimize the boundary condition (the merging boundary) for Poisson mesh merging. The user needs only to casually mark a source region and a target region. Our algorithm automatically searches for an optimal boundary condition within the marked regions such that the change of the found boundary during merging is minimal in terms of similarity transformation. Experimental results demonstrate that our merging tool is easy to use and produces visually better merging results than unoptimized techniques.", "title": "" }, { "docid": "3c848d254ae907a75dcbf502ed94aa84", "text": "We study the problem of computing routes for electric vehicles (EVs) in road networks. 
Since their battery capacity is limited, and consumed energy per distance increases with velocity, driving the fastest route is often not desirable and may even be infeasible. On the other hand, the energy-optimal route may be too conservative in that it contains unnecessary detours or simply takes too long. In this work, we propose to use multicriteria optimization to obtain Pareto sets of routes that trade energy consumption for speed. In particular, we exploit the fact that the same road segment can be driven at different speeds within reasonable intervals. As a result, we are able to provide routes with low energy consumption that still follow major roads, such as freeways. Unfortunately, the size of the resulting Pareto sets can be too large to be practical. We therefore also propose several nontrivial techniques that can be applied on-line at query time in order to speed up computation and filter insignificant solutions from the Pareto sets. Our extensive experimental study, which uses a real-world energy consumption model, reveals that we are able to compute diverse sets of alternative routes on continental networks that closely resemble the exact Pareto set in just under a second—several orders of magnitude faster than the exhaustive algorithm. 1998 ACM Subject Classification G.2.2 Graph Theory, G.2.3 Applications", "title": "" }, { "docid": "abb45e408cb37a0ad89f0b810b7f583b", "text": "In a mobile computing environment, a user carrying a portable computer can execute a mobile transaction by submitting the operations of the transaction to distributed data servers from different locations. As a result of this mobility, the operations of the transaction may be executed at different servers. The distribution of operations implies that the transmission of messages (such as those involved in a two phase commit protocol) may be required among these data servers in order to coordinate the execution of these operations. In this paper, we will address the distribution of operations that update partitioned data in mobile environments. We show that, for operations pertaining to resource allocation, the message overhead (e.g., for a 2PC protocol) introduced by the distribution of operations is undesirable and unnecessary. We introduce a new algorithm, the RenlnJation Algorithm (RA), that does not necessitate the incurring of message overheads for the commitment of mobile transactions. We address two issues related to the RA algorithm: a termination protocol and a protocol for non-partition-commutative operations. We perform a comparison between the proposed RA algorithm and existing solutions that use a 2PC protocol.", "title": "" }, { "docid": "ed3b8bfdd6048e4a07ee988f1e35fd21", "text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. 
For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean  ±  std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.", "title": "" }, { "docid": "ad7a5bccf168ac3b13e13ccf12a94f7d", "text": "As one of the most popular social media platforms today, Twitter provides people with an effective way to communicate and interact with each other. Through these interactions, influence among users gradually emerges and changes people's opinions. Although previous work has studied interpersonal influence as the probability of activating others during information diffusion, they ignore an important fact that information diffusion is the result of influence, while dynamic interactions among users produce influence. In this article, the authors propose a novel temporal influence model to learn users' opinion behaviors regarding a specific topic by exploring how influence emerges during communications. The experiments show that their model performs better than other influence models with different influence assumptions when predicting users' future opinions, especially for the users with high opinion diversity.", "title": "" }, { "docid": "c86aad62e950d7c10f93699d421492d5", "text": "Carotid intima-media thickness (CIMT) is a good surrogate for atherosclerosis. Hyperhomocysteinemia is an independent risk factor for cardiovascular diseases. We aim to investigate the relationships between homocysteine (Hcy) related biochemical indexes and CIMT, the associations between Hcy related SNPs and CIMT, as well as the potential gene–gene interactions. The present study recruited full siblings (186 eligible families with 424 individuals) with no history of cardiovascular events from a rural area of Beijing. We examined CIMT, intima-media thickness for common carotid artery (CCA-IMT) and carotid bifurcation, tested plasma levels for Hcy, vitamin B6 (VB6), vitamin B12 (VB12) and folic acid (FA), and genotyped 9 SNPs on MTHFR, MTR, MTRR, BHMT, SHMT1, CBS genes. Associations between SNPs and biochemical indexes and CIMT indexes were analyzed using family-based association test analysis. 
We used multi-level mixed-effects regression model to verify SNP-CIMT associations and to explore the potential gene–gene interactions. VB6, VB12 and FA were negatively correlated with CIMT indexes (p < 0.05). rs2851391 T allele was associated with decreased plasma VB12 levels (p = 0.036). In FABT, CBS rs2851391 was significantly associated with CCA-IMT (p = 0.021) and CIMT (p = 0.019). In multi-level mixed-effects regression model, CBS rs2851391 was positively significantly associated with CCA-IMT (Coef = 0.032, se = 0.009, raw p < 0.001) after Bonferoni correction (corrected α = 0.0056). Gene–gene interactions were found between CBS rs2851391 and BHMT rs10037045 for CCA-IMT (p = 0.011), as well as between CBS rs2851391 and MTR rs1805087 for CCA-IMT (p = 0.007) and CIMT (p = 0.022). Significant associations are found between Hcy metabolism related genetic polymorphisms, biochemical indexes and CIMT indexes. There are complex interactions between genetic polymorphisms for CCA-IMT and CIMT.", "title": "" } ]
scidocsrr
7dc33ca0df883f80793682ba14baff7a
Three-level neutral-point-clamped inverters in transformerless PV systems — State of the art
[ { "docid": "a0e7cdeefc33d4078702e5368dd9f5b9", "text": "This paper presents a single-phase five-level photovoltaic (PV) inverter topology for grid-connected PV systems with a novel pulsewidth-modulated (PWM) control scheme. Two reference signals identical to each other with an offset equivalent to the amplitude of the triangular carrier signal were used to generate PWM signals for the switches. A digital proportional-integral current control algorithm is implemented in DSP TMS320F2812 to keep the current injected into the grid sinusoidal and to have high dynamic performance with rapidly changing atmospheric conditions. The inverter offers much less total harmonic distortion and can operate at near-unity power factor. The proposed system is verified through simulation and is implemented in a prototype, and the experimental results are compared with that with the conventional single-phase three-level grid-connected PWM inverter.", "title": "" } ]
[ { "docid": "2220633d6343df0ebb2d292358ce182b", "text": "This paper presents a system for fully automatic recognition and reconstruction of 3D objects in image databases. We pose the object recognition problem as one of finding consistent matches between all images, subject to the constraint that the images were taken from a perspective camera. We assume that the objects or scenes are rigid. For each image, we associate a camera matrix, which is parameterised by rotation, translation and focal length. We use invariant local features to find matches between all images, and the RANSAC algorithm to find those that are consistent with the fundamental matrix. Objects are recognised as subsets of matching images. We then solve for the structure and motion of each object, using a sparse bundle adjustment algorithm. Our results demonstrate that it is possible to recognise and reconstruct 3D objects from an unordered image database with no user input at all.", "title": "" }, { "docid": "752e6d6f34ffc638e9a0d984a62db184", "text": "Defect prediction models are classifiers that are trained to identify defect-prone software modules. Such classifiers have configurable parameters that control their characteristics (e.g., the number of trees in a random forest classifier). Recent studies show that these classifiers may underperform due to the use of suboptimal default parameter settings. However, it is impractical to assess all of the possible settings in the parameter spaces. In this paper, we investigate the performance of defect prediction models where Caret --- an automated parameter optimization technique --- has been applied. Through a case study of 18 datasets from systems that span both proprietary and open source domains, we find that (1) Caret improves the AUC performance of defect prediction models by as much as 40 percentage points; (2) Caret-optimized classifiers are at least as stable as (with 35% of them being more stable than) classifiers that are trained using the default settings; and (3) Caret increases the likelihood of producing a top-performing classifier by as much as 83%. Hence, we conclude that parameter settings can indeed have a large impact on the performance of defect prediction models, suggesting that researchers should experiment with the parameters of the classification techniques. Since automated parameter optimization techniques like Caret yield substantially benefits in terms of performance improvement and stability, while incurring a manageable additional computational cost, they should be included in future defect prediction studies.", "title": "" }, { "docid": "667a457dcb1f379abd4e355e429dc40d", "text": "BACKGROUND\nViolent death is a serious problem in the United States. Previous research showing US rates of violent death compared with other high-income countries used data that are more than a decade old.\n\n\nMETHODS\nWe examined 2010 mortality data obtained from the World Health Organization for populous, high-income countries (n = 23). Death rates per 100,000 population were calculated for each country and for the aggregation of all non-US countries overall and by age and sex. Tests of significance were performed using Poisson and negative binomial regressions.\n\n\nRESULTS\nUS homicide rates were 7.0 times higher than in other high-income countries, driven by a gun homicide rate that was 25.2 times higher. For 15- to 24-year-olds, the gun homicide rate in the United States was 49.0 times higher. 
Firearm-related suicide rates were 8.0 times higher in the United States, but the overall suicide rates were average. Unintentional firearm deaths were 6.2 times higher in the United States. The overall firearm death rate in the United States from all causes was 10.0 times higher. Ninety percent of women, 91% of children aged 0 to 14 years, 92% of youth aged 15 to 24 years, and 82% of all people killed by firearms were from the United States.\n\n\nCONCLUSIONS\nThe United States has an enormous firearm problem compared with other high-income countries, with higher rates of homicide and firearm-related suicide. Compared with 2003 estimates, the US firearm death rate remains unchanged while firearm death rates in other countries decreased. Thus, the already high relative rates of firearm homicide, firearm suicide, and unintentional firearm death in the United States compared with other high-income countries increased between 2003 and 2010.", "title": "" }, { "docid": "b42e92aba32ff037362ecc40b816d063", "text": "In this paper we discuss security issues for cloud computing including storage security, data security, and network security and secure virtualization. Then we select some topics and describe them in more detail. In particular, we discuss a scheme for secure third party publications of documents in a cloud. Next we discuss secure federated query processing with map Reduce and Hadoop. Next we discuss the use of secure coprocessors for cloud computing. Third we discuss XACML implementation for Hadoop. We believe that building trusted applications from untrusted components will be a major aspect of secure cloud computing.", "title": "" }, { "docid": "2ecd815af00b9961259fa9b2a9185483", "text": "This paper describes the current development status of a mobile robot designed to inspect the outer surface of large oil ship hulls and floating production storage and offloading platforms. These vessels require a detailed inspection program, using several nondestructive testing techniques. A robotic crawler designed to perform such inspections is presented here. Locomotion over the hull is provided through magnetic tracks, and the system is controlled by two networked PCs and a set of custom hardware devices to drive motors, video cameras, ultrasound, inertial platform, and other devices. Navigation algorithm uses an extended-Kalman-filter (EKF) sensor-fusion formulation, integrating odometry and inertial sensors. It was shown that the inertial navigation errors can be decreased by selecting appropriate Q and R matrices in the EKF formulation.", "title": "" }, { "docid": "5343db8a8bc5e300b9ad488d0eda56d4", "text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to differences in genre. 
Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second, two ambiguous elements are present, each of which functions both as a connector and a disjunctor.", "title": "" }, { "docid": "9cb13d599da25991d11d276aaa76a005", "text": "We propose a quasi real-time method for discrimination of ventricular ectopic beats from both supraventricular and paced beats in the electrocardiogram (ECG). The heartbeat waveforms were evaluated within a fixed-length window around the fiducial points (100 ms before, 450 ms after). Our algorithm was designed to operate with minimal expert intervention and we define that the operator is required only to initially select up to three ‘normal’ heartbeats (the most frequently seen supraventricular or paced complexes). These were named original QRS templates and their copies were substituted continuously throughout the ECG analysis to capture slight variations in the heartbeat waveforms of the patient’s sustained rhythm. The method is based on matching of the evaluated heartbeat with the QRS templates by a complex set of ECG descriptors, including maximal cross-correlation, area difference and frequency spectrum difference. Temporal features were added by analyzing the R-R intervals. The classification criteria were trained by statistical assessment of the ECG descriptors calculated for all heartbeats in MIT-BIH Supraventricular Arrhythmia Database. The performance of the classifiers was tested on the independent MIT-BIH Arrhythmia Database. The achieved unbiased accuracy is represented by sensitivity of 98.4% and specificity of 98.86%, both being competitive to other published studies. The provided computationally efficient techniques enable the fast post-recording analysis of lengthy Holter-monitor ECG recordings, as well as they can serve as a quasi real-time detection method embedded into surface ECG monitors.", "title": "" }, { "docid": "3a852aa880c564a85cc8741ce7427ced", "text": "INTRODUCTION\nTumeric is a spice that comes from the root Curcuma longa, a member of the ginger family, Zingaberaceae. In Ayurveda (Indian traditional medicine), tumeric has been used for its medicinal properties for various indications and through different routes of administration, including topically, orally, and by inhalation. Curcuminoids are components of tumeric, which include mainly curcumin (diferuloyl methane), demethoxycurcumin, and bisdemethoxycurcmin.\n\n\nOBJECTIVES\nThe goal of this systematic review of the literature was to summarize the literature on the safety and anti-inflammatory activity of curcumin.\n\n\nMETHODS\nA search of the computerized database MEDLINE (1966 to January 2002), a manual search of bibliographies of papers identified through MEDLINE, and an Internet search using multiple search engines for references on this topic was conducted. The PDR for Herbal Medicines, and four textbooks on herbal medicine and their bibliographies were also searched.\n\n\nRESULTS\nA large number of studies on curcumin were identified. These included studies on the antioxidant, anti-inflammatory, antiviral, and antifungal properties of curcuminoids. Studies on the toxicity and anti-inflammatory properties of curcumin have included in vitro, animal, and human studies. 
A phase 1 human trial with 25 subjects using up to 8000 mg of curcumin per day for 3 months found no toxicity from curcumin. Five other human trials using 1125-2500 mg of curcumin per day have also found it to be safe. These human studies have found some evidence of anti-inflammatory activity of curcumin. The laboratory studies have identified a number of different molecules involved in inflammation that are inhibited by curcumin including phospholipase, lipooxygenase, cyclooxygenase 2, leukotrienes, thromboxane, prostaglandins, nitric oxide, collagenase, elastase, hyaluronidase, monocyte chemoattractant protein-1 (MCP-1), interferon-inducible protein, tumor necrosis factor (TNF), and interleukin-12 (IL-12).\n\n\nCONCLUSIONS\nCurcumin has been demonstrated to be safe in six human trials and has demonstrated anti-inflammatory activity. It may exert its anti-inflammatory activity by inhibition of a number of different molecules that play a role in inflammation.", "title": "" }, { "docid": "64c156ee4171b5b84fd4eedb1d922f55", "text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.", "title": "" }, { "docid": "c6029c95b8a6b2c6dfb688ac049427dc", "text": "This paper presents development of a two-fingered robotic device for amputees whose hands are partially impaired. In this research, we focused on developing a compact and lightweight robotic finger system, so the target amputee would be able to execute simple activities in daily living (ADL), such as grasping a bottle or a cup for a long time. The robotic finger module was designed by considering the impaired shape and physical specifications of the target patient's hand. The proposed prosthetic finger was designed using a linkage mechanism which was able to create underactuated finger motion. This underactuated mechanism contributes to minimizing the number of required actuators for finger motion. In addition, the robotic finger was not driven by an electro-magnetic rotary motor, but a shape-memory alloy (SMA) actuator. Having a driving method using SMA wire contributed to reducing the total weight of the prosthetic robot finger as it has higher energy density than that offered by the method using the electrical DC motor. In this paper, we confirmed the performance of the proposed robotic finger by fundamental driving tests and the characterization of the SMA actuator.", "title": "" }, { "docid": "17d1439650efccf83390834ba933db1a", "text": "The arterial vascularization of the pineal gland (PG) remains a debatable subject. This study aims to provide detailed information about the arterial vascularization of the PG. Thirty adult human brains were obtained from routine autopsies. Cerebral arteries were separately cannulated and injected with colored latex. The dissections were carried out using a surgical microscope. The diameters of the branches supplying the PG at their origin and vascularization areas of the branches of the arteries were investigated. 
The main artery of the PG was the lateral pineal artery, and it originated from the posterior circulation. The other arteries included the medial pineal artery from the posterior circulation and the rostral pineal artery mainly from the anterior circulation. Posteromedial choroidal artery was an important artery that branched to the PG. The arterial supply to the PG was studied comprehensively considering the debate and inadequacy of previously published studies on this issue available in the literature. This anatomical knowledge may be helpful for surgical treatment of pathologies of the PG, especially in children who develop more pathology in this region than adults.", "title": "" }, { "docid": "1ddfbf702c35a689367cd2b27dc1c6c6", "text": "In this paper, we propose a simple but powerful prior, color attenuation prior, for haze removal from a single input hazy image. By creating a linear model for modelling the scene depth of the hazy image under this novel prior and learning the parameters of the model by using a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily remove haze from a single image. Experimental results show that the proposed approach is highly efficient and it outperforms state-of-the-art haze removal algorithms in terms of the dehazing effect as well.", "title": "" }, { "docid": "333fd7802029f38bda35cd2077e7de59", "text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.", "title": "" }, { "docid": "3bd2bfd1c7652f8655d009c085d6ed5c", "text": "The past decade has witnessed the boom of human-machine interactions, particularly via dialog systems. In this paper, we study the task of response generation in open-domain multi-turn dialog systems. Many research efforts have been dedicated to building intelligent dialog systems, yet few shed light on deepening or widening the chatting topics in a conversational session, which would attract users to talk more. To this end, this paper presents a novel deep scheme consisting of three channels, namely global, wide, and deep ones. The global channel encodes the complete historical information within the given context, the wide one employs an attention-based recurrent neural network model to predict the keywords that may not appear in the historical context, and the deep one trains a Multi-layer Perceptron model to select some keywords for an in-depth discussion. 
Thereafter, our scheme integrates the outputs of these three channels to generate desired responses. To justify our model, we conducted extensive experiments to compare our model with several state-of-the-art baselines on two datasets: one is constructed by ourselves and the other is a public benchmark dataset. Experimental results demonstrate that our model yields promising performance by widening or deepening the topics of interest.", "title": "" }, { "docid": "d473619f76f81eced041df5bc012c246", "text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.", "title": "" }, { "docid": "17676785398d4ed24cc04cb3363a7596", "text": "Generative models (GMs) such as Generative Adversary Network (GAN) and Variational Auto-Encoder (VAE) have thrived these years and achieved high quality results in generating new samples. Especially in Computer Vision, GMs have been used in image inpainting, denoising and completion, which can be treated as the inference from observed pixels to corrupted pixels. However, images are hierarchically structured which are quite different from many real-world inference scenarios with non-hierarchical features. These inference scenarios contain heterogeneous stochastic variables and irregular mutual dependences. Traditionally they are modeled by Bayesian Network (BN). However, the learning and inference of BN model are NP-hard thus the number of stochastic variables in BN is highly constrained. In this paper, we adapt typical GMs to enable heterogeneous learning and inference in polynomial time. We also propose an extended autoregressive (EAR) model and an EAR with adversary loss (EARA) model and give theoretical results on their effectiveness. Experiments on several BN datasets show that our proposed EAR model achieves the best performance in most cases compared to other GMs. Except for black box analysis, we’ve also done a serial of experiments on Markov border inference of GMs for white box analysis and give theoretical results.", "title": "" }, { "docid": "4b74b9d4c4b38082f9f667e363f093b2", "text": "We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) 
and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org.", "title": "" }, { "docid": "b885526ab7db7d7ed502698758117c80", "text": "Cancer, more than any other human disease, now has a surfeit of potential molecular targets poised for therapeutic exploitation. Currently, a number of attractive and validated cancer targets remain outside of the reach of pharmacological regulation. Some have been described as undruggable, at least by traditional strategies. In this article, we outline the basis for the undruggable moniker, propose a reclassification of these targets as undrugged, and highlight three general classes of this imposing group as exemplars with some attendant strategies currently being explored to reclassify them. Expanding the spectrum of disease-relevant targets to pharmacological manipulation is central to reducing cancer morbidity and mortality.", "title": "" }, { "docid": "ec0733962301d6024da773ad9d0f636d", "text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.", "title": "" }, { "docid": "21c7cbcf02141c60443f912ae5f1208b", "text": "A novel driving scheme based on simultaneous emission is reported for 2D/3D AMOLED TVs. 
The new method reduces leftright crosstalk without sacrificing luminance. The new scheme greatly simplifies the pixel circuit as the number of transistors for Vth compensation is reduced from 6 to 3. The capacitive load of scan lines is reduced by 48%, enabling very high refresh rate (240 Hz).", "title": "" } ]
scidocsrr
85d2de403377831ff1a6f5b7c671d438
Discrimination of focal and non-focal EEG signals using entropy-based features in EEMD and CEEMDAN domains
[ { "docid": "8ff0683625b483ed1e77b1720bcc0a15", "text": "A new Ensemble Empirical Mode Decomposition (EEMD) is presented. This new approach consists of sifting an ensemble of white noise-added signal (data) and treats the mean as the final true result. Finite, not infinitesimal, amplitude white noise is necessary to force the ensemble to exhaust all possible solutions in the sifting process, thus making the different scale signals to collate in the proper intrinsic mode functions (IMF) dictated by the dyadic filter banks. As EEMD is a time–space analysis method, the added white noise is averaged out with sufficient number of trials; the only persistent part that survives the averaging process is the component of the signal (original data), which is then treated as the true and more physical meaningful answer. The effect of the added white noise is to provide a uniform reference frame in the time–frequency space; therefore, the added noise collates the portion of the signal of comparable scale in one IMF. With this ensemble mean, one can separate scales naturally without any a priori subjective criterion selection as in the intermittence test for the original EMD algorithm. This new approach utilizes the full advantage of the statistical characteristics of white noise to perturb the signal in its true solution neighborhood, and to cancel itself out after serving its purpose; therefore, it represents a substantial improvement over the original EMD and is a truly noise-assisted data analysis (NADA) method.", "title": "" } ]
[ { "docid": "43919b011f7d65d82d03bb01a5e85435", "text": "Self-inflicted burns are regularly admitted to burns units worldwide. Most of these patients are referred to psychiatric services and are successfully treated however some return to hospital with recurrent self-inflicted burns. The aim of this study is to explore the characteristics of the recurrent self-inflicted burn patients admitted to the Royal North Shore Hospital during 2004-2011. Burn patients were drawn from a computerized database and recurrent self-inflicted burn patients were identified. Of the total of 1442 burn patients, 40 (2.8%) were identified as self-inflicted burns. Of these patients, 5 (0.4%) were identified to have sustained previous self-inflicted burns and were interviewed by a psychiatrist. Each patient had been diagnosed with a borderline personality disorder and had suffered other forms of deliberate self-harm. Self-inflicted burns were utilized to relieve or help regulate psychological distress, rather than to commit suicide. Most patients had a history of emotional neglect, physical and/or sexual abuse during their early life experience. Following discharge from hospital, the patients described varying levels of psychiatric follow-up, from a post-discharge review at a local community mental health centre to twice-weekly psychotherapy. The patients who engaged in regular psychotherapy described feeling more in control of their emotions and reported having a longer period of abstinence from self-inflicted burn. Although these patients represent a small proportion of all burns, the repeat nature of their injuries led to a significant use of clinical resources. A coordinated and consistent treatment pathway involving surgical and psychiatric services for recurrent self-inflicted burns may assist in the management of these challenging patients.", "title": "" }, { "docid": "916e10c8bd9f5aa443fa4d8316511c94", "text": "A full-bridge LLC resonant converter with series-parallel connected transformers for an onboard battery charger of electric vehicles is proposed, which can realize zero voltage switching turn-on of power switches and zero current switching turn-off of rectifier diodes. In this converter, two same small transformers are employed instead of the single transformer in the traditional LLC resonant converter. The primary windings of these two transformers are series-connected to obtain equal primary current, while the secondary windings are parallel-connected to be provided with the same secondary voltage, so the power can be automatically balanced. Series-connection can reduce the turns of primary windings. Parallel-connection can reduce the current stress of the secondary windings and the conduction loss of rectifier diodes. Compared with the traditional LLC resonant converter with single transformer under same power level, the smaller low-profile cores can be used to reduce the transformers loss and improve heat dissipation. In this paper, the operating principle, steady state analysis, and design of the proposed converter are described, simulation and experimental prototype of the proposed LLC converter is established to verify the effectiveness of the proposed converter.", "title": "" }, { "docid": "91f20c48f5a4329260aadb87a0d8024c", "text": "In this paper, we survey key design for manufacturing issues for extreme scaling with emerging nanolithography technologies, including double/multiple patterning lithography, extreme ultraviolet lithography, and electron-beam lithography. 
These nanolithography and nanopatterning technologies have different manufacturing processes and their unique challenges to very large scale integration (VLSI) physical design, mask synthesis, and so on. It is essential to have close VLSI design and underlying process technology co-optimization to achieve high product quality (power/performance, etc.) and yield while making future scaling cost-effective and worthwhile. Recent results and examples will be discussed to show the enablement and effectiveness of such design and process integration, including lithography model/analysis, mask synthesis, and lithography friendly physical design.", "title": "" }, { "docid": "f76717050a5d891f63e475ba3e3ff955", "text": "Computational Advertising is the currently emerging multidimensional statistical modeling sub-discipline in digital advertising industry. Web pages visited per user every day is considerably increasing, resulting in an enormous access to display advertisements (ads). The rate at which the ad is clicked by users is termed as the Click Through Rate (CTR) of an advertisement. This metric facilitates the measurement of the effectiveness of an advertisement. The placement of ads in appropriate location leads to the rise in the CTR value that influences the growth of customer access to advertisement resulting in increased profit rate for the ad exchange, publishers and advertisers. Thus it is imperative to predict the CTR metric in order to formulate an efficient ad placement strategy. This paper proposes a predictive model that generates the click through rate based on different dimensions of ad placement for display advertisements using statistical machine learning regression techniques such as multivariate linear regression (LR), poisson regression (PR) and support vector regression(SVR). The experiment result reports that SVR based click model outperforms in predicting CTR through hyperparameter optimization.", "title": "" }, { "docid": "210a1dda2fc4390a5b458528b176341e", "text": "Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. (2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice [43] that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters. 
The code is available at https://github.com/Ding-Liu/NLRN.", "title": "" }, { "docid": "788ee16a0f05fe09340e80f14722ee77", "text": "This paper presents an approach for detecting anomalous events in videos with crowds. The main goal is to recognize patterns that might lead to an anomalous event. An anomalous event might be characterized by the deviation from the normal or usual, but not necessarily in an undesirable manner, e.g., an anomalous event might just be different from normal but not a suspicious event from the surveillance point of view. One of the main challenges of detecting such events is the difficulty to create models due to their unpredictability and their dependency on the context of the scene. Based on these challenges, we present a model that uses general concepts, such as orientation, velocity, and entropy to capture anomalies. Using such a type of information, we can define models for different cases and environments. Assuming images captured from a single static camera, we propose a novel spatiotemporal feature descriptor, called histograms of optical flow orientation and magnitude and entropy, based on optical flow information. To determine the normality or abnormality of an event, the proposed model is composed of training and test steps. In the training, we learn the normal patterns. Then, during test, events are described and if they differ significantly from the normal patterns learned, they are considered as anomalous. The experimental results demonstrate that our model can handle different situations and is able to recognize anomalous events with success. We use the well-known UCSD and Subway data sets and introduce a new data set, namely, Badminton.", "title": "" }, { "docid": "fbb48416c34d4faee1a87ac2efaf466d", "text": "Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell et al., 2018), a strong, linguistically-informed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo outperforms typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still outperform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP.", "title": "" }, { "docid": "7ba61c8c5eba7d8140c84b3e7cbc851a", "text": "One of the aims of modern First-Person Shooter (FPS) design is to provide an immersive experience to the player. This paper examines the role of sound in enabling such immersion and argues that, even in ‘realism’ FPS games, it may be achieved sonically through a focus on caricature rather than realism. The paper utilizes and develops previous work in which both a conceptual framework for the design and analysis of run and gun FPS sound is developed and the notion of the relationship between player and FPS soundscape as an acoustic ecology is put forward (Grimshaw and Schott 2007a; Grimshaw and Schott 2007b). 
Some problems of sound practice and sound reproduction in the game are highlighted and a conceptual solution is proposed.", "title": "" }, { "docid": "f7fa80456b0fb479bc694cb89fbd84e5", "text": "In the past two decades, social capital in its various forms and contexts has emerged as one of the most salient concepts in social sciences. While much excitement has been generated, divergent views, perspectives, and expectations have also raised the serious question: is it a fad or does it have enduring qualities that will herald a new intellectual enterprise? This presentation's purpose is to review social capital as discussed in the literature, identify controversies and debates, consider some critical issues, and propose conceptual and research strategies in building a theory. I will argue that such a theory and the research enterprise must be based on the fundamental understanding that social capital is captured from embedded resources in social networks. Deviations from this understanding in conceptualization and measurement lead to confusion in analyzing causal mechanisms in the macro- and microprocesses. It is precisely these mechanisms and processes, essential for an interactive theory about structure and action, to which social capital promises to make contributions.", "title": "" }, { "docid": "45578369630e65fe60be3495767d1367", "text": "The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from distinguishing periods of posture and movement so as to prevent inappropriate movement of the prosthesis. Few studies, however, have investigated how decoding behavioral states and detecting the transitions between posture and movement could be used autonomously to trigger a kinematic decoder. We recorded simultaneous neuronal ensemble and local field potential (LFP) activity from microelectrode arrays in primary motor cortex (M1) and dorsal (PMd) and ventral (PMv) premotor areas of two male rhesus monkeys performing a center-out reach-and-grasp task, while upper limb kinematics were tracked with a motion capture system with markers on the dorsal aspect of the forearm, hand, and fingers. A state decoder was trained to distinguish four behavioral states (baseline, reaction, movement, hold), while a kinematic decoder was trained to continuously decode hand end point position and 18 joint angles of the wrist and fingers. LFP amplitude most accurately predicted transition into the reaction (62%) and movement (73%) states, while spikes most accurately decoded arm, hand, and finger kinematics during movement. Using an LFP-based state decoder to trigger a spike-based kinematic decoder [r = 0.72, root mean squared error (RMSE) = 0.15] significantly improved decoding of reach-to-grasp movements from baseline to final hold, compared with either a spike-based state decoder combined with a spike-based kinematic decoder (r = 0.70, RMSE = 0.17) or a spike-based kinematic decoder alone (r = 0.67, RMSE = 0.17). Combining LFP-based state decoding with spike-based kinematic decoding may be a valuable step toward the realization of BMI control of a multifingered neuroprosthesis performing dexterous manipulation.", "title": "" }, { "docid": "ecd79e88962ca3db82eaf2ab94ecd5f4", "text": "Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. 
Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.", "title": "" }, { "docid": "c9b6f91a7b69890db88b929140f674ec", "text": "Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.", "title": "" }, { "docid": "f4639c2523687aa0d5bfdd840df9cfa4", "text": "This established database of manufacturers and their design specification, determined the condition and design of the vehicle based on the perception and preference of jeepney drivers and passengers, and compared the parts of the jeepney vehicle using Philippine National Standards and international standards. The study revealed that most jeepney manufacturing firms have varied specifications with regard to the capacity, dimensions and weight of the vehicle and similar specification on the parts and equipment of the jeepney vehicle. Most of the jeepney drivers and passengers want to improve, change and standardize the parts of the jeepney vehicle. The parts of jeepney vehicles have similar specifications compared to the 4 out of 5 mandatory PNS and 22 out of 32 UNECE Regulations applicable for jeepney vehicle. It is concluded that the jeepney vehicle can be standardized in terms of design, safety and environmental concerns.", "title": "" }, { "docid": "f90fcd27a0ac4a22dc5f229f826d64bf", "text": "While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. 
In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.", "title": "" }, { "docid": "8800dba6bb4cea195c8871eb5be5b0a8", "text": "Text summarization and sentiment classification, in NLP, are two main tasks implemented on text analysis, focusing on extracting the major idea of a text at different levels. Based on the characteristics of both, sentiment classification can be regarded as a more abstractive summarization task. According to the scheme, a Self-Attentive Hierarchical model for jointly improving text Summarization and Sentiment Classification (SAHSSC) is proposed in this paper. This model jointly performs abstractive text summarization and sentiment classification within a hierarchical end-to-end neural framework, in which the sentiment classification layer on top of the summarization layer predicts the sentiment label in the light of the text and the generated summary. Furthermore, a self-attention layer is also proposed in the hierarchical framework, which is the bridge that connects the summarization layer and the sentiment classification layer and aims at capturing emotional information at text-level as well as summary-level. The proposed model can generate a more relevant summary and lead to a more accurate summary-aware sentiment prediction. Experimental results evaluated on SNAP amazon online review datasets show that our model outperforms the state-of-the-art baselines on both abstractive text summarization and sentiment classification by a considerable margin.", "title": "" }, { "docid": "17ed907c630ec22cbbb5c19b5971238d", "text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.", "title": "" }, { "docid": "1b9ecdeb1df8eaf7cfef88acbe093d78", "text": "Chemical databases store information in text representations, and the SMILES format is a universal standard used in many cheminformatics so‰ware. Encoded in each SMILES string is structural information that can be used to predict complex chemical properties. 
In this work, we develop SMILES2vec, a deep RNN that automatically learns features from SMILES to predict chemical properties, without the need for additional explicit feature engineering. Using Bayesian optimization methods to tune the network architecture, we show that an optimized SMILES2vec model can serve as a general-purpose neural network for predicting distinct chemical properties including toxicity, activity, solubility and solvation energy, while also outperforming contemporary MLP neural networks that use engineered features. Furthermore, we demonstrate proof-of-concept of interpretability by developing an explanation mask that localizes on the most important characters used in making a prediction. When tested on the solubility dataset, it identified specific parts of a chemical that are consistent with established first-principles knowledge with an accuracy of 88%. Our work demonstrates that neural networks can learn technically accurate chemical concepts and provide state-of-the-art accuracy, making interpretable deep neural networks a useful tool of relevance to the chemical industry.", "title": "" }, { "docid": "e139355ddbe5a8d6293f028e379abc93", "text": "The IoT is a network of interconnected everyday objects called “things” that have been augmented with a small measure of computing capabilities. Lately, the IoT has been affected by a variety of different botnet activities. As botnets have been the cause of serious security risks and financial damage over the years, existing Network forensic techniques cannot identify and track current sophisticated methods of botnets. This is because commercial tools mainly depend on signature-based approaches that cannot discover new forms of botnet. In the literature, several studies have investigated the use of Machine Learning (ML) techniques in order to train and validate a model for defining such attacks, but they still produce high false alarm rates with the challenge of investigating the tracks of botnets. This paper investigates the role of ML techniques for developing a Network forensic mechanism based on network flow identifiers that can track suspicious activities of botnets. The experimental results using the UNSW-NB15 dataset revealed that ML techniques with flow identifiers can effectively and efficiently detect botnets’ attacks and their tracks.", "title": "" }, { "docid": "f01a19652bff88923a3141fb56d805e2", "text": "This paper presents a visible light communication system, focusing mostly on the aspects related to the hardware design and implementation. The designed system is aimed to ensure a highly-reliable communication between a commercial LED-based traffic light and a receiver mounted on a vehicle. Enabling wireless data transfer between the road infrastructure and vehicles has the potential to significantly increase the safety and efficiency of the transportation system. The paper presents the advantages of the proposed system and explains some of the choices made in the implementation process.", "title": "" }, { "docid": "1157ced7937578d8a54bc9bb462b5706", "text": "In recent years, the problem of associating a sentence with an image has gained a lot of attention. This work continues to push the envelope and makes further progress in the performance of image annotation and image search by a sentence tasks. In this work, we are using the Fisher Vector as a sentence representation by pooling the word2vec embedding of each word in the sentence. 
The Fisher Vector is typically taken as the gradients of the log-likelihood of descriptors, with respect to the parameters of a Gaussian Mixture Model (GMM). In this work we present two other Mixture Models and derive their Expectation-Maximization and Fisher Vector expressions. The first is a Laplacian Mixture Model (LMM), which is based on the Laplacian distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian Mixture Model (HGLMM) which is based on a weighted geometric mean of the Gaussian and Laplacian distribution. Finally, by using the new Fisher Vectors derived from HGLMMs to represent sentences, we achieve state-of-the-art results for both the image annotation and the image search by a sentence tasks on four benchmarks: Pascal1K, Flickr8K, Flickr30K, and COCO.", "title": "" } ]
scidocsrr
29cff8a03006ac91f79f8f420d2267d2
Driver Action Prediction Using Deep (Bidirectional) Recurrent Neural Network
[ { "docid": "1169d70de6d0c67f52ecac4d942d2224", "text": "All drivers have habits behind the wheel. Different drivers vary in how they hit the gas and brake pedals, how they turn the steering wheel, and how much following distance they keep to follow a vehicle safely and comfortably. In this paper, we model such driving behaviors as car-following and pedal operation patterns. The relationship between following distance and velocity mapped into a two-dimensional space is modeled for each driver with an optimal velocity model approximated by a nonlinear function or with a statistical method of a Gaussian mixture model (GMM). Pedal operation patterns are also modeled with GMMs that represent the distributions of raw pedal operation signals or spectral features extracted through spectral analysis of the raw pedal operation signals. The driver models are evaluated in driver identification experiments using driving signals collected in a driving simulator and in a real vehicle. Experimental results show that the driver model based on the spectral features of pedal operation signals efficiently models driver individual differences and achieves an identification rate of 76.8% for a field test with 276 drivers, resulting in a relative error reduction of 55% over driver models that use raw pedal operation signals without spectral analysis", "title": "" }, { "docid": "c1235195e9ce4a9db0e22b165915a5ff", "text": "Advanced Driver Assistance Systems (ADAS) have made driving safer over the last decade. They prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver. However, many accidents are unavoidable because by the time drivers are alerted, it is already too late. Anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give ADAS more time to avoid or prepare for the danger. In this work we propose a vehicular sensor-rich platform and learning algorithms for maneuver anticipation. For this purpose we equip a car with cameras, Global Positioning System (GPS), and a computing device to capture the driving context from both inside and outside of the car. In order to anticipate maneuvers, we propose a sensory-fusion deep learning architecture which jointly learns to anticipate and fuse multiple sensory streams. Our architecture consists of Recurrent Neural Networks (RNNs) that use Long Short-Term Memory (LSTM) units to capture long temporal dependencies. We propose a novel training procedure which allows the network to predict the future given only a partial temporal context. We introduce a diverse data set with 1180 miles of natural freeway and city driving, and show that we can anticipate maneuvers 3.5 seconds before they occur in realtime with a precision and recall of 90.5% and 87.4% respectively.", "title": "" } ]
[ { "docid": "601d9060ac35db540cdd5942196db9e0", "text": "In this paper, we review nine visualization techniques that can be used for visual exploration of multidimensional financial data. We illustrate the use of these techniques by studying the financial performance of companies from the pulp and paper industry. We also illustrate the use of visualization techniques for detecting multivariate outliers, and other patterns in financial performance data in the form of clusters, relationships, and trends. We provide a subjective comparison between different visualization techniques as to their capabilities for providing insight into financial performance data. The strengths of each technique and the potential benefits of using multiple visualization techniques for gaining insight into financial performance data are highlighted.", "title": "" }, { "docid": "a3da533f428b101c8f8cb0de04546e48", "text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by tradition Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.", "title": "" }, { "docid": "06c65b566b298cc893388a6f317bfcb1", "text": "Emotion recognition from speech is one of the key steps towards emotional intelligence in advanced human-machine interaction. Identifying emotions in human speech requires learning features that are robust and discriminative across diverse domains that differ in terms of language, spontaneity of speech, recording conditions, and types of emotions. This corresponds to a learning scenario in which the joint distributions of features and labels may change substantially across domains. In this paper, we propose a deep architecture that jointly exploits a convolutional network for extracting domain-shared features and a long short-term memory network for classifying emotions using domain-specific features. We use transferable features to enable model adaptation from multiple source domains, given the sparseness of speech emotion data and the fact that target domains are short of labeled data. A comprehensive cross-corpora experiment with diverse speech emotion domains reveals that transferable features provide gains ranging from 4.3% to 18.4% in speech emotion recognition. 
We evaluate several domain adaptation approaches, and we perform an ablation study to understand which source domains add the most to the overall recognition effectiveness for a given target domain.", "title": "" }, { "docid": "1ea8990241b140c1c06d935a5f73abec", "text": "This paper presents design and implementation of a mobile embedded system to monitor and record key operation indicators of a distribution transformer like load currents, transformer oil and ambient temperatures. The proposed on-line monitoring system integrates a global service mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. It is installed at the distribution transformer site and the above mentioned parameters are recorded using the built-in S-channel analog to digital converter (ADC) of the embedded system. The acquired parameters are processed and recorded in the system memory. If there is any abnormality or an emergency situation the system sends SMS (short message service) messages to designated mobile telephones containing information about the abnormality according to some predefined instructions and policies that are stored on the embedded system EEPROM. Also, it sends SMS to a central database via the GSM modem for further processing. This mobile system will help the utilities to optimally utilize transformers and identify problems before any catastrophic failure.", "title": "" }, { "docid": "62d86051d5f3f53f59547a98632c1e5c", "text": "Infantile hemangiomas are the most common benign vascular tumors in infancy and childhood. As hemangioma could regress spontaneously, it generally does not require treatment unless proliferation interferes with normal function or gives rise to risk of serious disfigurement and complications unlikely to resolve without treatment. Various methods for treating infant hemangiomas have been documented, including wait and see policy, laser therapy, drug therapy, sclerotherapy, radiotherapy, surgery and so on, but none of these therapies can be used for all hemangiomas. To obtain the best treatment outcomes, the treatment protocol should be individualized and comprehensive as well as sequential. Based on published literature and clinical experiences, we established a treatment guideline in order to provide criteria for the management of head and neck hemangiomas. This protocol will be renewed and updated to include and reflect any cutting-edge medical knowledge, and provide the newest treatment modalities which will benefit our patients.", "title": "" }, { "docid": "c07e6639d32403b267d9b6ef0f475d21", "text": "Exudates are the primary sign of Diabetic Retinopathy. Early detection can potentially reduce the risk of blindness. An automatic method to detect exudates from low-contrast digital images of retinopathy patients with non-dilated pupils using a Fuzzy C-Means (FCM) clustering is proposed. Contrast enhancement preprocessing is applied before four features, namely intensity, standard deviation on intensity, hue and a number of edge pixels, are extracted to supply as input parameters to coarse segmentation using FCM clustering method. The first result is then fine-tuned with morphological techniques. The detection results are validated by comparing with expert ophthalmologists' hand-drawn ground-truths. Sensitivity, specificity, positive predictive value (PPV), positive likelihood ratio (PLR) and accuracy are used to evaluate overall performance. 
It is found that the proposed method detects exudates successfully with sensitivity, specificity, PPV, PLR and accuracy of 87.28%, 99.24%, 42.77%, 224.26 and 99.11%, respectively.", "title": "" }, { "docid": "03b3aa5c74eb4d66c1bd969fbce835c7", "text": "In the past few decades, unmanned aerial vehicles (UAVs) have become promising mobile platforms capable of navigating semiautonomously or autonomously in uncertain environments. The level of autonomy and the flexible technology of these flying robots have rapidly evolved, making it possible to coordinate teams of UAVs in a wide spectrum of tasks. These applications include search and rescue missions; disaster relief operations, such as forest fires [1]; and environmental monitoring and surveillance. In some of these tasks, UAVs work in coordination with other robots, as in robot-assisted inspection at sea [2]. Recently, radio-controlled UAVs carrying radiation sensors and video cameras were used to monitor, diagnose, and evaluate the situation at Japans Fukushima Daiichi nuclear plant facility [3].", "title": "" }, { "docid": "753eb03a060a5e5999eee478d6d164f9", "text": "Recently reported results with distributed-vector word representations in natural language processing make them appealing for incorporation into a general cognitive architecture like Sigma. This paper describes a new algorithm for learning such word representations from large, shallow information resources, and how this algorithm can be implemented via small modifications to Sigma. The effectiveness and speed of the algorithm are evaluated via a comparison of an external simulation of it with state-of-the-art algorithms. The results from more limited experiments with Sigma are also promising, but more work is required for it to reach the effectiveness and speed of the simulation.", "title": "" }, { "docid": "cc31337277f8816eee0762fe47415f3f", "text": "Nowadays Photovoltaic (PV) plants have become significant investment projects with long term Return Of Investment (ROI). This is making investors together with operations managers to deal with reliable information on photovoltaic plants performances. Most of the information is gathered through data monitoring systems and also supplied by proper inverters in case of grid connected plants. It usually relates to series/parallel combinations of PV panels strings, in most cases, but rarely to individual PV panels. Furthermore, in case of huge dimensions PV plants, with different ground profiles, etc., should any adverse circumstances happen (panel failure, sudden shadowing, clouds, strong wind), it is difficult to identify the exact problem location. The use of distributed wired or wireless sensors can be a solution. Nevertheless, no one is problems free and all are significant cost. In this article is proposed a low cost DC Power Lines Communications (DC PLC) based PV plant parameters smart monitoring communications and control module. The aim is the development of a micro controller (uC) based sensor module with corresponding modem for communications through already existing DC plant power wiring as data transmission lines. This will reduce drastically both hardware and transmission lines costs.", "title": "" }, { "docid": "b250ac830e1662252069cc85128358a7", "text": "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. 
It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.", "title": "" }, { "docid": "da7beedfca8e099bb560120fc5047399", "text": "OBJECTIVE\nThis study aims to assess the relationship of late-night cell phone use with sleep duration and quality in a sample of Iranian adolescents.\n\n\nMETHODS\nThe study population consisted of 2400 adolescents, aged 12-18 years, living in Isfahan, Iran. Age, body mass index, sleep duration, cell phone use after 9p.m., and physical activity were documented. For sleep assessment, the Pittsburgh Sleep Quality Index questionnaire was used.\n\n\nRESULTS\nThe participation rate was 90.4% (n=2257 adolescents). The mean (SD) age of participants was 15.44 (1.55) years; 1270 participants reported to use cell phone after 9p.m. Overall, 56.1% of girls and 38.9% of boys reported poor quality sleep, respectively. Wake-up time was 8:17 a.m. (2.33), among late-night cell phone users and 8:03a.m. (2.11) among non-users. Most (52%) late-night cell phone users had poor sleep quality. Sedentary participants had higher sleep latency than their peers. Adjusted binary and multinomial logistic regression models showed that late-night cell users were 1.39 times more likely to have a poor sleep quality than non-users (p-value<0.001).\n\n\nCONCLUSION\nLate-night cell phone use by adolescents was associated with poorer sleep quality. Participants who were physically active had better sleep quality and quantity. As part of healthy lifestyle recommendations, avoidance of late-night cell phone use should be encouraged in adolescents.", "title": "" }, { "docid": "341f04892cc9f965abca32458b67f63c", "text": "In this paper, two single fed low-profile cavity-backed planar slot antennas for circular polarization (CP) applications are first introduced by half mode substrate integrated waveguide (HMSIW) technique. One of the structures presents right handed CP (RHCP), while the other one offers left handed CP (LHCP). A single layer of low cost printed circuit board (PCB) is employed for both antennas providing low-cost, lightweight, and also easy integration with planar circuits. An inset microstrip line is used to excite two orthogonal quarter-wave length patch modes with required phase difference for generating CP wave. The new proposed antennas are successfully designed and fabricated. Measured results are in good agreement with those obtained by numerical investigation using HFSS. 
Results exhibit that both antennas present the advantages of conventional cavity backed antennas including high gain and high front to back ratio (FTBR).", "title": "" }, { "docid": "7842e5c7ad3dc11d9d53b360e4e2691a", "text": "It is becoming obvious that all cancers have a defective p53 pathway, either through TP53 mutation or deregulation of the tumor suppressor function of the wild type TP53. In this study we examined the expression of P53 and Caspase 3 in transperitoneally injected Ehrlich Ascites carcinoma cells (EAC) treated with Tetrodotoxin in the liver of adult mice in order to evaluate the possible proapoptotic effect of Tetrodotoxin. Results: Early in the treatment, numerous EAC detected in the large blood vessels & central veins and expressed both of P53 & Caspase 3, in contrast to the late absence of P53 expressing EAC at the 12th day of Tetrodotoxin treatment. In the same context, predominantly the perivascular hepatocytes expressed Caspase 3, in contrast to the more diffuse expression pattern late with Tetrodotoxin treatment. None of the hepatocytes ever expressed P53, neither with early nor late Tetrodotoxin treatment. Conclusion: Tetrodotoxin therapy has a proapoptotic effect on Ehrlich Ascites carcinoma cells (EAC). This may be through enhancing the tumor suppressor function of the wild type TP53 with subsequent Caspase 3 activation.", "title": "" }, { "docid": "9b98e43825bd36736c7c87bb2cee5a8c", "text": "Gamification is the usage of game mechanics, dynamics, aesthetics and game thinking in non-game systems. Its main objective is to increase user’s motivation, experience and engagement. For the same reason, it has started to penetrate in e-learning systems. However, when using gamified design elements in e-learning, we must consider various types of learners. In the phases of analysis and design of such elements, the cooperation of education, technology, pedagogy, design and finance experts is required. This paper discusses the development phases of introducing gamification into e-learning systems, various gamification design elements and their suitability for usage in e-learning systems. Several gamified design elements are found suited for e-learning (including points, badges, trophies, customization, leader boards, levels, progress tracking, challenges, feedback, social engagement loops and the freedom to fail). Advice for the usage of each of those elements in e-learning systems is also provided in this study. Based on that advice and the identified phases of introducing gamification into e-learning systems, we conducted an experimental study to investigate the effectiveness of gamification of an informatics online course. Results showed that students enrolled in the gamified version of the online module achieved greater learning success. Positive results encourage us to investigate the gamification of online learning content for other topics and courses. We also encourage more research on the influence of specific gamified design elements on learner’s motivation and engagement.", "title": "" }, { "docid": "4c5ac799c97f99d3a64bcbea6b6cb88d", "text": "This paper presents a new type of monolithic microwave integrated circuit (MMIC)-based active quasi-circulator using phase cancellation and combination techniques for simultaneous transmit and receive (STAR) phased-array applications. 
The device consists of a passive core of three quadrature hybrids and active components to provide active quasi-circulation operation. The core of three quadrature hybrids can be implemented using Lange couplers. The device is capable of high isolation performance, high-frequency operation, broadband performance, and improvement of the noise figure (NF) at the receive port by suppressing transmit noise. For passive quasi-circulation operation, the device can achieve 35-dB isolation between the transmit and receive port with 2.6-GHz bandwidth (BW) and insertion loss of 4.5 dB at X-band. For active quasi-operation, the device is shown to have 2.3-GHz BW of 30-dB isolation with 1.5-dB transmit-to-antenna gain and 4.7-dB antenna-to-receive insertion loss, while the NF at the receive port is approximately 5.5 dB. The device is capable of a power stress test up to 34 dBm at the output ports at 10.5 GHz. For operation with typical 25-dB isolation, the device is capable of operation up to 5.6-GHz BW at X-band. The device is also shown to be operable up to W -band by simulation with ~15-GHz BW of 20-dB isolation. The proposed architecture is suitable for MMIC integration and system-on-chip applications.", "title": "" }, { "docid": "dde9424652393fa66350ec6510c20e97", "text": "Framed under a cognitive approach to task-based L2 learning, this study used a pedagogical approach to investigate the effects of three vocabulary lessons (one traditional and two task-based) on acquisition of basic meanings, forms and morphological aspects of Spanish words. Quantitative analysis performed on the data suggests that the type of pedagogical approach had no impact on immediate retrieval (after treatment) of targeted word forms, but it had an impact on long-term retrieval (one week) of targeted forms. In particular, task-based lessons seemed to be more effective than the Presentation, Practice and Production (PPP) lesson. The analysis also suggests that a task-based lesson with an explicit focus-on-forms component was more effective than a task-based lesson that did not incorporate this component in promoting acquisition of word morphological aspects. The results also indicate that the explicit focus on forms component may be more effective when placed at the end of the lesson, when meaning has been acquired. Results are explained in terms of qualitative differences in amounts of focus on form and meaning, type of form-focused instruction provided, and opportunities for on-line targeted output retrieval. The findings of this study provide evidence for the value of a proactive (Doughty and Williams, 1998a) form-focused approach to Task-Based L2 vocabulary learning, especially structure-based production tasks (Ellis, 2003). Overall, they suggest an important role of pedagogical tasks in teaching L2 vocabulary.", "title": "" }, { "docid": "6436f0137e5dbc3fb3dac031ddb93629", "text": "Perovskite solar cells based on organometal halide light absorbers have been considered a promising photovoltaic technology due to their superb power conversion efficiency (PCE) along with very low material costs. Since the first report on a long-term durable solid-state perovskite solar cell with a PCE of 9.7% in 2012, a PCE as high as 19.3% was demonstrated in 2014, and a certified PCE of 17.9% was shown in 2014. Such a high photovoltaic performance is attributed to optically high absorption characteristics and balanced charge transport properties with long diffusion lengths. 
Nevertheless, there are lots of puzzles to unravel the basis for such high photovoltaic performances. The working principle of perovskite solar cells has not been well established by far, which is the most important thing for understanding perovksite solar cells. In this review, basic fundamentals of perovskite materials including opto-electronic and dielectric properties are described to give a better understanding and insight into high-performing perovskite solar cells. In addition, various fabrication techniques and device structures are described toward the further improvement of perovskite solar cells.", "title": "" }, { "docid": "ddab10d66473ac7c4de26e923bf59083", "text": "Phased arrays allow electronic scanning of the antenna beam. However, these phased arrays are not widely used due to a high implementation cost. This article discusses the advantages of the RF architecture and the implementation of silicon RFICs for phased-array transmitters/receivers. In addition, this work also demonstrates how silicon RFICs can play a vital role in lowering the cost of phased arrays.", "title": "" }, { "docid": "953e70084692643648e6f489aa1e761e", "text": "To successfully select and implement nudges, policy makers need a psychological understanding of who opposes nudges, how they are perceived, and when alternative methods (e.g., forced choice) might work better. Using two representative samples, we examined four factors that influence U.S. attitudes toward nudges – types of nudges, individual dispositions, nudge perceptions, and nudge frames. Most nudges were supported, although opt-out defaults for organ donations were opposed in both samples. “System 1” nudges (e.g., defaults and sequential orderings) were viewed less favorably than “System 2” nudges (e.g., educational opportunities or reminders). System 1 nudges were perceived as more autonomy threatening, whereas System 2 nudges were viewed as more effective for better decision making and more necessary for changing behavior. People with greater empathetic concern tended to support both types of nudges and viewed them as the “right” kind of goals to have. Individualists opposed both types of nudges, and conservatives tended to oppose both types. Reactant people and those with a strong desire for control opposed System 1 nudges. To see whether framing could influence attitudes, we varied the description of the nudge in terms of the target (Personal vs. Societal) and the reference point for the nudge (Costs vs. Benefits). Empathetic people were more supportive when framing highlighted societal costs or benefits, and reactant people were more opposed to nudges when frames highlighted the personal costs of rejection.", "title": "" } ]
scidocsrr
530f8bf58b05f05dd09dd9df731e50bb
(PACIS) 2014 EXPLORING MOBILE PAYMENT ADOPTION IN CHINA
[ { "docid": "401e7ab4d97d7f0f113b8ca9ec1c91ce", "text": "The probability sampling techniques used for quantitative studies are rarely appropriate when conducting qualitative research. This article considers and explains the differences between the two approaches and describes three broad categories of naturalistic sampling: convenience, judgement and theoretical models. The principles are illustrated with practical examples from the author's own research.", "title": "" }, { "docid": "3e691cf6055eb564dedca955b816a654", "text": "Many Internet-based services have already been ported to the mobile-based environment, embracing the new services is therefore critical to deriving revenue for services providers. Based on a valence framework and trust transfer theory, we developed a trust-based customer decision-making model of the non-independent, third-party mobile payment services context. We empirically investigated whether a customer’s established trust in Internet payment services is likely to influence his or her initial trust in mobile payment services. We also examined how these trust beliefs might interact with both positive and negative valence factors and affect a customer’s adoption of mobile payment services. Our SEM analysis indicated that trust indeed had a substantial impact on the cross-environment relationship and, further, that trust in combination with the positive and negative valence determinants directly and indirectly influenced behavioral intention. In addition, the magnitudes of these effects on workers and students were significantly different from each other.", "title": "" } ]
[ { "docid": "a1f1d34e8ceeb984976e45074694d4c2", "text": "This paper proposes a model of the doubly fed induction generator (DFIG) suitable for transient stability studies. The main assumption adopted in the model is that the current control loops, which are much faster than the electromechanic transients under study, do not have a significant influence on the transient stability of the power system and may be considered instantaneous. The proposed DFIG model is a set of algebraic equations which are solved using an iterative procedure. A method is also proposed to calculate the DFIG initial conditions. A detailed variable-speed windmill model has been developed using the proposed DFIG model. This windmill model has been integrated in a transient stability simulation program in order to demonstrate its feasibility. Several simulations have been performed using a base case which includes a small grid, a wind farm represented by a single windmill, and different operation points. The evolution of several electric variables during the simulations is shown and discussed.", "title": "" }, { "docid": "a62a23df11fd72522a3d9726b60d4497", "text": "In this paper, a simple single-phase grid-connected photovoltaic (PV) inverter topology consisting of a boost section, a low-voltage single-phase inverter with an inductive filter, and a step-up transformer interfacing the grid is considered. Ideally, this topology will not inject any lower order harmonics into the grid due to high-frequency pulse width modulation operation. However, the nonideal factors in the system such as core saturation-induced distorted magnetizing current of the transformer and the dead time of the inverter, etc., contribute to a significant amount of lower order harmonics in the grid current. A novel design of inverter current control that mitigates lower order harmonics is presented in this paper. An adaptive harmonic compensation technique and its design are proposed for the lower order harmonic compensation. In addition, a proportional-resonant-integral (PRI) controller and its design are also proposed. This controller eliminates the dc component in the control system, which introduces even harmonics in the grid current in the topology considered. The dynamics of the system due to the interaction between the PRI controller and the adaptive compensation scheme is also analyzed. The complete design has been validated with experimental results and good agreement with theoretical analysis of the overall system is observed.", "title": "" }, { "docid": "9a19caf553338e950c89f5f670016f50", "text": "Countering distributed denial of service (DDoS) attacks is becoming ever more challenging with the vast resources and techniques increasingly available to attackers. In this paper, we consider sophisticated attacks that are protocol-compliant, non-intrusive, and utilize legitimate application-layer requests to overwhelm system resources. We characterize application-layer resource attacks as either request flooding, asymmetric, or repeated one-shot, on the basis of the application workload parameters that they exploit. To protect servers from these attacks, we propose a counter-mechanism namely DDoS Shield that consists of a suspicion assignment mechanism and a DDoS-resilient scheduler. In contrast to prior work, our suspicion mechanism assigns a continuous value as opposed to a binary measure to each client session, and the scheduler utilizes these values to determine if and when to schedule a session's requests. 
Using testbed experiments on a web application, we demonstrate the potency of these resource attacks and evaluate the efficacy of our counter-mechanism. For instance, we mount an asymmetric attack which overwhelms the server resources, increasing the response time of legitimate clients from 0.3 seconds to 40 seconds. Under the same attack scenario, DDoS Shield improves the victims' performance to 1.5 seconds.", "title": "" }, { "docid": "72cff051b5d2bcd8eaf41b6e9ae9eca9", "text": "We propose a new method for detecting patterns of anomalies in categorical datasets. We assume that anomalies are generated by some underlying process which affects only a particular subset of the data. Our method consists of two steps: we first use a \"local anomaly detector\" to identify individual records with anomalous attribute values, and then detect patterns where the number of anomalous records is higher than expected. Given the set of anomalies flagged by the local anomaly detector, we search over all subsets of the data defined by any set of fixed values of a subset of the attributes, in order to detect self-similar patterns of anomalies. We wish to detect any such subset of the test data which displays a significant increase in anomalous activity as compared to the normal behavior of the system (as indicated by the training data). We perform significance testing to determine if the number of anomalies in any subset of the test data is significantly higher than expected, and propose an efficient algorithm to perform this test over all such subsets of the data. We show that this algorithm is able to accurately detect anomalous patterns in real-world hospital, container shipping and network intrusion data.", "title": "" }, { "docid": "775e78af608c07853af2e2c31a59bf5c", "text": "This investigation compared the effect of high-volume (VOL) versus high-intensity (INT) resistance training on stimulating changes in muscle size and strength in resistance-trained men. Following a 2-week preparatory phase, participants were randomly assigned to either a high-volume (VOL; n = 14, 4 × 10-12 repetitions with ~70% of one repetition maximum [1RM], 1-min rest intervals) or a high-intensity (INT; n = 15, 4 × 3-5 repetitions with ~90% of 1RM, 3-min rest intervals) training group for 8 weeks. Pre- and posttraining assessments included lean tissue mass via dual energy x-ray absorptiometry, muscle cross-sectional area and thickness of the vastus lateralis (VL), rectus femoris (RF), pectoralis major, and triceps brachii muscles via ultrasound images, and 1RM strength in the back squat and bench press (BP) exercises. Blood samples were collected at baseline, immediately post, 30 min post, and 60 min postexercise at week 3 (WK3) and week 10 (WK10) to assess the serum testosterone, growth hormone (GH), insulin-like growth factor-1 (IGF1), cortisol, and insulin concentrations. Compared to VOL, greater improvements (P < 0.05) in lean arm mass (5.2 ± 2.9% vs. 2.2 ± 5.6%) and 1RM BP (14.8 ± 9.7% vs. 6.9 ± 9.0%) were observed for INT. Compared to INT, area under the curve analysis revealed greater (P < 0.05) GH and cortisol responses for VOL at WK3 and cortisol only at WK10. Compared to WK3, the GH and cortisol responses were attenuated (P < 0.05) for VOL at WK10, while the IGF1 response was reduced (P < 0.05) for INT. 
It appears that high-intensity resistance training stimulates greater improvements in some measures of strength and hypertrophy in resistance-trained men during a short-term training period.", "title": "" }, { "docid": "a72c9eb8382d3c94aae77fa4eadd1df8", "text": "Techniques for identifying the author of an unattributed document can be applied to problems in information analysis and in academic scholarship. A range of methods have been proposed in the research literature, using a variety of features and machine learning approaches, but the methods have been tested on very different data and the results cannot be compared. It is not even clear whether the differences in performance are due to feature selection or other variables. In this paper we examine the use of a large publicly available collection of newswire articles as a benchmark for comparing authorship attribution methods. To demonstrate the value of having a benchmark, we experimentally compare several recent feature-based techniques for authorship attribution, and test how well these methods perform as the volume of data is increased. We show that the benchmark is able to clearly distinguish between different approaches, and that the scalability of the best methods based on using function words features is acceptable, with only moderate decline as the difficulty of the problem is increased.", "title": "" }, { "docid": "5a248466c2e82b8453baa483a05bc25b", "text": "Early severe stress and maltreatment produces a cascade of neurobiological events that have the potential to cause enduring changes in brain development. These changes occur on multiple levels, from neurohumoral (especially the hypothalamic-pituitary-adrenal [HPA] axis) to structural and functional. The major structural consequences of early stress include reduced size of the mid-portions of the corpus callosum and attenuated development of the left neocortex, hippocampus, and amygdala. Major functional consequences include increased electrical irritability in limbic structures and reduced functional activity of the cerebellar vermis. There are also gender differences in vulnerability and functional consequences. The neurobiological sequelae of early stress and maltreatment may play a significant role in the emergence of psychiatric disorders during development.", "title": "" }, { "docid": "aa234355d0b0493e1d8c7a04e7020781", "text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. 
This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.", "title": "" }, { "docid": "88f643b2bd917e47e5173d34744c4b20", "text": "Large image datasets such as ImageNet or open-ended photo websites like Flickr are revealing new challenges to image classification that were not apparent in smaller, fixed sets. In particular, the efficient handling of dynamically growing datasets, where not only the amount of training data but also the number of classes increases over time, is a relatively unexplored problem. In this challenging setting, we study how two variants of Random Forests (RF) perform under four strategies to incorporate new classes while avoiding to retrain the RFs from scratch. The various strategies account for different trade-offs between classification accuracy and computational efficiency. In our extensive experiments, we show that both RF variants, one based on Nearest Class Mean classifiers and the other on SVMs, outperform conventional RFs and are well suited for incrementally learning new classes. In particular, we show that RFs initially trained with just 10 classes can be extended to 1,000 classes with an acceptable loss of accuracy compared to training from the full data and with great computational savings compared to retraining for each new batch of classes.", "title": "" }, { "docid": "bbfdc30b412df84861e242d4305ca20d", "text": "OBJECTIVES\nLocal anesthetic injection into the interspace between the popliteal artery and the posterior capsule of the knee (IPACK) has the potential to provide motor-sparing analgesia to the posterior knee after total knee arthroplasty. The primary objective of this cadaveric study was to evaluate injectate spread to relevant anatomic structures with IPACK injection.\n\n\nMETHODS\nAfter receipt of Institutional Review Board Biospecimen Subcommittee approval, IPACK injection was performed on fresh-frozen cadavers. The popliteal fossa in each specimen was dissected and examined for injectate spread.\n\n\nRESULTS\nTen fresh-frozen cadaver knees were included in the study. Injectate was observed to spread in the popliteal fossa at a mean ± SD of 6.1 ± 0.7 cm in the medial-lateral dimension and 10.1 ± 3.2 cm in the proximal-distal dimension. No injectate was noted to be in contact with the proximal segment of the sciatic nerve, but 3 specimens showed injectate spread to the tibial nerve. In 3 specimens, the injectate showed possible contact with the common peroneal nerve. The middle genicular artery was consistently surrounded by injectate.\n\n\nCONCLUSIONS\nThis cadaver study of IPACK injection demonstrated spread throughout the popliteal fossa without proximal sciatic involvement. However, the potential for injectate to spread to the tibial or common peroneal nerve was demonstrated. Consistent surrounding of the middle genicular artery with injectate suggests a potential mechanism of analgesia for the IPACK block, due to the predictable relationship between articular sensory nerves and this artery. Further study is needed to determine the ideal site of IPACK injection.", "title": "" }, { "docid": "09b273c9e77f6fc1b2de20f50227c44d", "text": "Age and gender are complementary soft biometric traits for face recognition. 
Successful estimation of age and gender from facial images taken under real-world conditions can contribute improving the identification results in the wild. In this study, in order to achieve robust age and gender classification in the wild, we have benefited from Deep Convolutional Neural Networks based representation. We have explored transferability of existing deep convolutional neural network (CNN) models for age and gender classification. The generic AlexNet-like architecture and domain specific VGG-Face CNN model are employed and fine-tuned with the Adience dataset prepared for age and gender classification in uncontrolled environments. In addition, task specific GilNet CNN model has also been utilized and used as a baseline method in order to compare with transferred models. Experimental results show that both transferred deep CNN models outperform the GilNet CNN model, which is the state-of-the-art age and gender classification approach on the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy, respectively. This outcome indicates that transferring a deep CNN model can provide better classification performance than a task specific CNN model, which has a limited number of layers and trained from scratch using a limited amount of data as in the case of GilNet. Domain specific VGG-Face CNN model has been found to be more useful and provided better performance for both age and gender classification tasks, when compared with generic AlexNet-like model, which shows that transfering from a closer domain is more useful.", "title": "" }, { "docid": "743424b3b532b16f018e92b2563458d5", "text": "We consider the problem of finding a few representatives for a dataset, i.e., a subset of data points that efficiently describes the entire dataset. We assume that each data point can be expressed as a linear combination of the representatives and formulate the problem of finding the representatives as a sparse multiple measurement vector problem. In our formulation, both the dictionary and the measurements are given by the data matrix, and the unknown sparse codes select the representatives via convex optimization. In general, we do not assume that the data are low-rank or distributed around cluster centers. When the data do come from a collection of low-rank models, we show that our method automatically selects a few representatives from each low-rank model. We also analyze the geometry of the representatives and discuss their relationship to the vertices of the convex hull of the data. We show that our framework can be extended to detect and reject outliers in datasets, and to efficiently deal with new observations and large datasets. The proposed framework and theoretical foundations are illustrated with examples in video summarization and image classification using representatives.", "title": "" }, { "docid": "265bf26646113a56101c594f563cb6dc", "text": "A system, particularly a decision-making concept, that facilitates highly automated driving on freeways in real traffic is presented. The system is capable of conducting fully automated lane change (LC) maneuvers with no need for driver approval. Due to the application in real traffic, a robust functionality and the general safety of all traffic participants are among the main requirements. Regarding these requirements, the consideration of measurement uncertainties demonstrates a major challenge. For this reason, a fully integrated probabilistic concept is developed. 
By means of this approach, uncertainties are regarded in the entire process of determining driving maneuvers. While this also includes perception tasks, this contribution puts a focus on the driving strategy and the decision-making process for the execution of driving maneuvers. With this approach, the BMW Group Research and Technology managed to drive 100% automated in real traffic on the freeway A9 from Munich to Ingolstadt, showing a robust, comfortable, and safe driving behavior, even during multiple automated LC maneuvers.", "title": "" }, { "docid": "3b6b746f4467fd53ade1d6d2798c45b7", "text": "We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and shares parameters of these transformations according to the subdivisions of the point clouds imposed onto them by kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behavior. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.", "title": "" }, { "docid": "3afa34f0420e422cfe1b3d61abad5e7f", "text": "One of the many challenges in designing autonomy for operation in uncertain and dynamic environments is the planning of collision-free paths. Roadmap-based motion planning is a popular technique for identifying collision-free paths, since it approximates the often infeasible space of all possible motions with a networked structure of valid configurations. We use stochastic reachable sets to identify regions of low collision probability, and to create roadmaps which incorporate likelihood of collision. We complete a small number of stochastic reachability calculations with individual obstacles a priori. This information is then associated with the weight, or preference for traversal, given to a transition in the roadmap structure. Our method is novel, and scales well with the number of obstacles, maintaining a relatively high probability of reaching the goal in a finite time horizon without collision, as compared to other methods. We demonstrate our method on systems with up to 50 dynamic obstacles.", "title": "" }, { "docid": "9d5593d89a206ac8ddb82921c2a68c43", "text": "This paper presents an automatic traffic surveillance system to estimate important traffic parameters from video sequences using only one camera. Different from traditional methods that can classify vehicles to only cars and noncars, the proposed method has a good ability to categorize vehicles into more specific classes by introducing a new "linearity" feature in vehicle representation. In addition, the proposed system can well tackle the problem of vehicle occlusions caused by shadows, which often lead to the failure of further vehicle counting and classification. This problem is solved by a novel line-based shadow algorithm that uses a set of lines to eliminate all unwanted shadows. The used lines are devised from the information of lane-dividing lines. Therefore, an automatic scheme to detect lane-dividing lines is also proposed. 
The found lane-dividing lines can also provide important information for feature normalization, which can make the vehicle size more invariant, and thus much enhance the accuracy of vehicle classification. Once all features are extracted, an optimal classifier is then designed to robustly categorize vehicles into different classes. When recognizing a vehicle, the designed classifier can collect different evidences from its trajectories and the database to make an optimal decision for vehicle classification. Since more evidences are used, more robustness of classification can be achieved. Experimental results show that the proposed method is more robust, accurate, and powerful than other traditional methods, which utilize only the vehicle size and a single frame for vehicle classification.", "title": "" }, { "docid": "04647771810ac62b27ee8da12833a02d", "text": "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.", "title": "" }, { "docid": "2b32087daf5c104e60f91ebf19cd744d", "text": "A large amount of food photos are taken in restaurants for diverse reasons. This dish recognition problem is very challenging, due to different cuisines, cooking styles and the intrinsic difficulty of modeling food from its visual appearance. Contextual knowledge is crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and geolocation of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then we reformulate the problem using a probabilistic model connecting dishes, restaurants and geolocations. We apply that model in three different tasks: dish recognition, restaurant recognition and geolocation refinement. Experiments on a dataset including 187 restaurants and 701 dishes show that combining multiple evidences (visual, geolocation, and external knowledge) can boost the performance in all tasks.", "title": "" }, { "docid": "da19fd683e64b0192bd52eadfade33a2", "text": "For professional users such as firefighters and other first responders, GNSS positioning technology (GPS, assisted GPS) can satisfy outdoor positioning requirements in many instances. However, there is still a need for high-performance deep indoor positioning for use by these same professional users. 
This need has already been clearly expressed by various communities of end users in the context of WearIT@Work, an R&D project funded by the European Community's Sixth Framework Program. It is known that map matching can help with indoor pedestrian navigation. In most previous research, it was assumed that detailed building plans are available. However, in many emergency / rescue scenarios, only very limited building plan information may be at hand. For example, a building outline might be obtained from aerial photographs or cadastre databases. Alternatively, an escape plan posted at the entrances to many buildings would yield only approximate exit door and stairwell locations as well as hallway and room orientation. What is not known is how much map information is really required for a USAR mission and how much each level of map detail might help to improve positioning accuracy. Obviously, the geometry of the building and the course taken through it will be factors to consider. The purpose of this paper is to show how a previously published Backtracking Particle Filter (BPF) can be combined with different levels of building plan detail to improve PDR performance. A new in/out scenario that might be typical of a reconnaissance mission during a fire in a two-story office building was evaluated. Using only external wall information, the new scenario yields positioning performance (2.56 m mean 2D error) that is greatly superior to the PDR-only, no-map base case (7.74 m mean 2D error). This result has substantial practical significance since this level of building plan detail could be quickly and easily generated in many emergency instances. The technique could be used to mitigate heading errors that result from exposing the IMU to extreme operating conditions. It is hoped that this mitigating effect will also occur for more irregular paths and in larger traversed spaces such as parking garages and warehouses.", "title": "" }, { "docid": "f4009fde2b4ac644d3b83b664e178b5f", "text": "This chapter describes the history of metaheuristics in five distinct periods, starting long before the first use of the term and ending a long time in the future.", "title": "" } ]
scidocsrr
a77f69045f4fb1cd9df339bb888672cd
ASR-based Features for Emotion Recognition: A Transfer Learning Approach
[ { "docid": "33bee298704171e68e413e875e413af3", "text": "We introduce multiplicative LSTM (mLSTM), a novel recurrent neural network architecture for sequence modelling that combines the long short-term memory (LSTM) and multiplicative recurrent neural network architectures. mLSTM is characterised by its ability to have different recurrent transition functions for each possible input, which we argue makes it more expressive for autoregressive density estimation. We demonstrate empirically that mLSTM outperforms standard LSTM and its deep variants for a range of character level modelling tasks, and that this improvement increases with the complexity of the task. This model achieves a test error of 1.19 bits/character on the last 4 million characters of the Hutter prize dataset when combined with dynamic evaluation.", "title": "" }, { "docid": "9d672a1d45bfd078c16915b7f5d949b0", "text": "To design a useful recommender system, it is important to understand how products relate to each other. For example, while a user is browsing mobile phones, it might make sense to recommend other phones, but once they buy a phone, we might instead want to recommend batteries, cases, or chargers. In economics, these two types of recommendations are referred to as substitutes and complements: substitutes are products that can be purchased instead of each other, while complements are products that can be purchased in addition to each other. Such relationships are essential as they help us to identify items that are relevant to a user's search.\n Our goal in this paper is to learn the semantics of substitutes and complements from the text of online reviews. We treat this as a supervised learning problem, trained using networks of products derived from browsing and co-purchasing logs. Methodologically, we build topic models that are trained to automatically discover topics from product reviews that are successful at predicting and explaining such relationships. Experimentally, we evaluate our system on the Amazon product catalog, a large dataset consisting of 9 million products, 237 million links, and 144 million reviews.", "title": "" }, { "docid": "3f5eed1f718e568dc3ba9abbcd6bfedd", "text": "The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of `context-aware' emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.", "title": "" } ]
[ { "docid": "3c66777d5f6c88c9e2881df4fb7783e6", "text": "Large-scale Internet of Things (IoT) services such as healthcare, smart cities, and marine monitoring are pervasive in cyber-physical environments strongly supported by Internet technologies and fog computing. Complex IoT services are increasingly composed of sensors, devices, and compute resources within fog computing infrastructures. The orchestration of such applications can be leveraged to alleviate the difficulties of maintenance and enhance data security and system reliability. However, efficiently dealing with dynamic variations and transient operational behavior is a crucial challenge within the context of choreographing complex services. Furthermore, with the rapid increase of the scale of IoT deployments, the heterogeneity, dynamicity, and uncertainty within fog environments and increased computational complexity further aggravate this challenge. This article gives an overview of the core issues, challenges, and future research directions in fog-enabled orchestration for IoT services. Additionally, it presents early experiences of an orchestration scenario, demonstrating the feasibility and initial results of using a distributed genetic algorithm in this context.", "title": "" }, { "docid": "4f747c2fb562be4608d1f97ead32e00b", "text": "With rapid development of the Internet, the web contents become huge. Most of the websites are publicly available and anyone can access the contents everywhere such as workplace, home and even schools. Nevertheless, not all the web contents are appropriate for all users, especially children. An example of these contents is pornography images which should be restricted to certain age group. Besides, these images are not safe for work (NSFW) in which employees should not be seen accessing such contents. Recently, convolutional neural networks have been successfully applied to many computer vision problems. Inspired by these successes, we propose a mixture of convolutional neural networks for adult content recognition. Unlike other works, our method is formulated on a weighted sum of multiple deep neural network models. The weights of each CNN models are expressed as a linear regression problem learnt using Ordinary Least Squares (OLS). Experimental results demonstrate that the proposed model outperforms both single CNN model and the average sum of CNN models in adult content recognition.", "title": "" }, { "docid": "5f068a11901763af752df9480b97e0c0", "text": "Beginning with a brief review of CMOS scaling trends from 1 m to 0.1 m, this paper examines the fundamental factors that will ultimately limit CMOS scaling and considers the design issues near the limit of scaling. The fundamental limiting factors are electron thermal energy, tunneling leakage through gate oxide, and 2D electrostatic scale length. Both the standby power and the active power of a processor chip will increase precipitously below the 0.1m or 100-nm technology generation. To extend CMOS scaling to the shortest channel length possible while still gaining significant performance benefit, an optimized, vertically and laterally nonuniform doping design (superhalo) is presented. It is projected that room-temperature CMOS will be scaled to 20-nm channel length with the superhalo profile. 
Low-temperature CMOS allows additional design space to further extend CMOS scaling to near 10 nm.", "title": "" }, { "docid": "846931a1e4c594626da26931110c02d6", "text": "A large volume of research has been conducted in the cognitive radio (CR) area over the last decade. However, the deployment of a commercial CR network is yet to emerge. A large portion of the existing literature does not build on real world scenarios, hence neglecting various important aspects of commercial telecommunication networks. For instance, a lot of attention has been paid to spectrum sensing as the front line functionality that needs to be completed in an efficient and accurate manner to enable an opportunistic CR network architecture. While on the one hand it is necessary to detect the existence of spectrum holes, on the other hand, simply sensing (cooperatively or not) the energy emitted from a primary transmitter cannot enable correct dynamic spectrum access. For example, the presence of a primary transmitter's signal does not mean that CR network users cannot access the spectrum since there might not be any primary receiver in the vicinity. Despite the existing solutions to the DSA problem, no robust, implementable scheme has emerged. The set of assumptions that these schemes are built upon do not always hold in realistic, wireless environments. Specific settings are assumed, which differ significantly from how existing telecommunication networks work. In this paper, we challenge the basic premises of the proposed schemes. We further argue that addressing the technical challenges we face in deploying robust CR networks can only be achieved if we radically change the way we design their basic functionalities. In support of our argument, we present a set of real-world scenarios, inspired by realistic settings in commercial telecommunications networks, namely TV and cellular, focusing on spectrum sensing as a basic and critical functionality in the deployment of CRs. We use these scenarios to show why existing DSA paradigms are not amenable to realistic deployment in complex wireless environments. The proposed study extends beyond cognitive radio networks, and further highlights the often existing gap between research and commercialization, paving the way to new thinking about how to accelerate commercialization and adoption of new networking technologies and services.", "title": "" }, { "docid": "8607b42b5c5ee1d535794390e06eb1bf", "text": "Quantitative association rule (QAR) mining has been recognized as an influential research problem over the last decade due to the popularity of quantitative databases and the usefulness of association rules in real life. Unlike boolean association rules (BARs), which only consider boolean attributes, QARs consist of quantitative attributes which contain much richer information than the boolean attributes. However, the combination of these quantitative attributes and their value intervals always gives rise to the generation of an explosively large number of itemsets, thereby severely degrading the mining efficiency. In this paper, we propose an information-theoretic approach to avoid unrewarding combinations of both the attributes and their value intervals being generated in the mining process. We study the mutual information between the attributes in a quantitative database and devise a normalization on the mutual information to make it applicable in the context of QAR mining.
To indicate the strong informative relationships among the attributes, we construct a mutual information graph (MI graph), whose edges are attribute pairs that have normalized mutual information no less than a predefined information threshold. We find that the cliques in the MI graph represent a majority of the frequent itemsets. We also show that frequent itemsets that do not form a clique in the MI graph are those whose attributes are not informatively correlated to each other. By utilizing the cliques in the MI graph, we devise an efficient algorithm that significantly reduces the number of value intervals of the attribute sets to be joined during the mining process. Extensive experiments show that our algorithm speeds up the mining process by up to two orders of magnitude. Most importantly, we are able to obtain most of the high-confidence QARs, whereas the QARs that are not returned by MIC are shown to be less interesting.", "title": "" }, { "docid": "a7683aa1cdb5cec5c00de191463acd8b", "text": "A novel PN diode decoding method for 3D NAND Flash is proposed. The PN diodes are fabricated self-aligned at the source side of the Vertical Gate (VG) 3D NAND architecture. Contrary to the previous 3D NAND approaches, there is no need to fabricate plural string select (SSL) transistors inside the array, thus enabling a highly symmetrical and scalable cell structure. A novel three-step programming pulse waveform is integrated to implement the program-inhibit method, capitalizing on the fact that the PN diodes can prevent leakage of the self-boosted channel potential. A large program-disturb-free window >5V is demonstrated.", "title": "" }, { "docid": "f6167a74c881d16faaf8fb4e804191e2", "text": "Automation, machine learning, and artificial intelligence (AI) are changing the landscape of echocardiography, providing complementary tools to physicians to enhance patient care. Multiple vendor software programs have incorporated automation to improve accuracy and efficiency of manual tracings. Automation with longitudinal strain and 3D echocardiography has shown great accuracy and reproducibility, allowing the incorporation of these techniques into daily workflow. This will give further experience to nonexpert readers and allow the integration of these essential tools into more echocardiography laboratories. The potential for machine learning in cardiovascular imaging is still being discovered as algorithms are being created, with training on large data sets beyond what traditional statistical reasoning can handle. Deep learning, when applied to large image repositories, will recognize complex relationships and patterns integrating all properties of the image, which will unlock further connections about the natural history and prognosis of cardiac disease states. The purpose of this review article is to describe the role and current use of automation, machine learning, and AI in echocardiography and discuss potential limitations and challenges in the future.", "title": "" }, { "docid": "56ec8f3e88731992a028a9322dbc4890", "text": "The term knowledge visualization has been used in many different fields with many different definitions. In this paper, we propose a new definition of knowledge visualization specifically in the context of visual analysis and reasoning. Our definition begins with the differentiation of knowledge as either explicit or tacit knowledge. We then present a model for the relationship between the two through the use of visualization.
Instead of directly representing data in a visualization, we first determine the value of the explicit knowledge associated with the data based on a cost/benefit analysis and display the knowledge in accordance with its importance. We propose that the displayed explicit knowledge leads us to create our own tacit knowledge through visual analytical reasoning and discovery.", "title": "" }, { "docid": "a701b681b5fb570cf8c0668fe691ee15", "text": "Coagulation-flocculation is a relatively simple physical-chemical technique in the treatment of old and stabilized leachate which has been practiced using a variety of conventional coagulants. Polymeric forms of metal coagulants which are increasingly applied in water treatment are not well documented in leachate treatment. In this research, the capability of poly-aluminum chloride (PAC) in the treatment of stabilized leachate from Pulau Burung Landfill Site (PBLS), Penang, Malaysia was studied. The removal efficiencies for chemical oxygen demand (COD), turbidity, color and total suspended solid (TSS) obtained using PAC were compared with those obtained using alum as a conventional coagulant. Central composite design (CCD) and response surface method (RSM) were applied to optimize the operating variables, viz. coagulant dosage and pH. Quadratic models developed for the four responses (COD, turbidity, color and TSS) studied indicated the optimum conditions to be PAC dosage of 2 g/L at pH 7.5 and alum dosage of 9.5 g/L at pH 7. The experimental data and model predictions agreed well. COD, turbidity, color and TSS removal efficiencies of 43.1, 94.0, 90.7, and 92.2% for PAC, and 62.8, 88.4, 86.4, and 90.1% for alum were demonstrated.", "title": "" }, { "docid": "8b7715a1a7d9d668e52a8f2bd90c89fa", "text": "A 275mm2 network-on-chip architecture contains 80 tiles arranged as a 10 × 8 2D array of floating-point cores and packet-switched routers, operating at 4GHz. The 15-FO4 design employs mesochronous clocking, fine-grained clock gating, dynamic sleep transistors, and body-bias techniques. The 65nm 100M transistor die is designed to achieve a peak performance of 1.0TFLOPS at 1V while dissipating 98W.", "title": "" }, { "docid": "65a87f693d78e69c01d812fef7e9e85a", "text": "MDPL has been proposed as a masked logic style that counteracts DPA attacks. Recently, it has been shown that the so-called “early propagation effect” might reduce the security of this logic style significantly. In the light of these findings, a 0.13 μm prototype chip that includes the implementation of an 8051-compatible microcontroller in MDPL has been analyzed. Attacks on the measured power traces of this implementation show a severe DPA leakage. In this paper, the results of a detailed analysis of the reasons for this leakage are presented. Furthermore, a proposal is made on how to improve MDPL with respect to the identified problems.", "title": "" }, { "docid": "ad131f6baec15a011252f484f1ef8f18", "text": "Recent studies have shown that Alzheimer's disease (AD) is related to alteration in brain connectivity networks. One type of connectivity, called effective connectivity, defined as the directional relationship between brain regions, is essential to brain function. However, there have been few studies on modeling the effective connectivity of AD and characterizing its difference from normal controls (NC). In this paper, we investigate the sparse Bayesian Network (BN) for effective connectivity modeling.
Specifically, we propose a novel formulation for the structure learning of BNs, which involves one L1-norm penalty term to impose sparsity and another penalty to ensure that the learned BN is a directed acyclic graph - a required property of BNs. We show, through both theoretical analysis and extensive experiments on eleven moderate and large benchmark networks with various sample sizes, that the proposed method has much improved learning accuracy and scalability compared with ten competing algorithms. We apply the proposed method to FDG-PET images of 42 AD and 67 NC subjects, and identify the effective connectivity models for AD and NC, respectively. Our study reveals that the effective connectivity of AD is different from that of NC in many ways, including the global-scale effective connectivity, intra-lobe, inter-lobe, and inter-hemispheric effective connectivity distributions, as well as the effective connectivity associated with specific brain regions. These findings are consistent with known pathology and clinical progression of AD, and will contribute to AD knowledge discovery.", "title": "" }, { "docid": "fa4f9e00ae199f34f2c28cb56799c7e5", "text": "OBJECTIVE\nTo examine how concurrent partnerships amplify the rate of HIV spread, using methods that can be supported by feasible data collection.\n\n\nMETHODS\nA fully stochastic simulation is used to represent a population of individuals, the sexual partnerships that they form and dissolve over time, and the spread of an infectious disease. Sequential monogamy is compared with various levels of concurrency, holding all other features of the infection process constant. Effective summary measures of concurrency are developed that can be estimated on the basis of simple local network data.\n\n\nRESULTS\nConcurrent partnerships exponentially increase the number of infected individuals and the growth rate of the epidemic during its initial phase. For example, when one-half of the partnerships in a population are concurrent, the size of the epidemic after 5 years is 10 times as large as under sequential monogamy. The primary cause of this amplification is the growth in the number of people connected in the network at any point in time: the size of the largest \"component\". Concurrency increases the size of this component, and the result is that the infectious agent is no longer trapped in a monogamous partnership after transmission occurs, but can spread immediately beyond this partnership to infect others. The summary measure of concurrency developed here does a good job in predicting the size of the amplification effect, and may therefore be a useful and practical tool for evaluation and intervention at the beginning of an epidemic.\n\n\nCONCLUSION\nConcurrent partnerships may be as important as multiple partners or cofactor infections in amplifying the spread of HIV. The public health implications are that data must be collected properly to measure the levels of concurrency in a population, and that messages promoting one partner at a time are as important as messages promoting fewer partners.", "title": "" }, { "docid": "4927fee47112be3d859733c498fbf594", "text": "Designing effective tools for detecting and recovering from software failures requires a deep understanding of software bug characteristics. We study software bug characteristics by sampling 2,060 real world bugs in three large, representative open-source projects—the Linux kernel, Mozilla, and Apache.
We manually study these bugs in three dimensions—root causes, impacts, and components. We further study the correlation between categories in different dimensions, and the trend of different types of bugs. The findings include: (1) semantic bugs are the dominant root cause. As software evolves, semantic bugs increase, while memory-related bugs decrease, calling for more research effort to address semantic bugs; (2) the Linux kernel operating system (OS) has more concurrency bugs than its non-OS counterparts, suggesting more effort into detecting concurrency bugs in operating system code; and (3) reported security bugs are increasing, and the majority of them are caused by semantic bugs, suggesting more support to help developers diagnose and fix security bugs, especially semantic security bugs. In addition, to reduce the manual effort in building bug benchmarks for evaluating bug detection and diagnosis tools, we use machine learning techniques to classify 109,014 bugs automatically.", "title": "" }, { "docid": "80912c6ff371cdc47ef92e793f2497a0", "text": "Since the explosion of the Web as a business medium, one of its primary uses has been for marketing. Soon, the Web will become a critical distribution channel for the majority of successful enterprises. The mass media, consumer marketers and advertising agencies seem to be in the midst of Internet discovery and exploitation. Before a company can envision what might sell online in the coming years, it must first understand the attitudes and behaviour of its potential customers. Hence, this study examines attitudes toward various aspects of online shopping and provides a better understanding of the potential of electronic commerce for both researchers and practitioners.", "title": "" }, { "docid": "fc06673e86c237e06d9e927e2f6468a8", "text": "Locality sensitive hashing (LSH) is a computationally efficient alternative to distance-based anomaly detection. The main advantages of LSH lie in constant detection time, low memory requirement, and simple implementation. However, since the metric of distance in LSHs does not consider the property of normal training data, a naive use of existing LSHs would not perform well. In this paper, we propose a new hashing scheme so that hash functions are selected depending on the properties of the normal training data for reliable anomaly detection. The distance metric of the proposed method, called NSH (Normality Sensitive Hashing), is theoretically interpreted in terms of the region of normal training data and its effectiveness is demonstrated through experiments on real-world data. Our results are favorably comparable to the state of the art with low-level features.", "title": "" }, { "docid": "f95ace29fea990f496f011446d4ed88f", "text": "Deep learning has dramatically changed the world overnight. It greatly boosted the development of visual perception, object detection, and speech recognition, etc. This was attributed to the multiple convolutional processing layers for abstraction of learning representations from massive data. The advantages of deep convolutional structures in data processing motivated the applications of artificial intelligence methods in robotic problems, especially perception and control systems, two typical and challenging problems in robotics. This paper presents a survey of the deep-learning research landscape in mobile robotics.
We start by introducing the definition and development of deep learning in related fields, especially the essential distinctions between image processing and robotic tasks. We describe and discuss several typical applications and related works in this domain, followed by the benefits of deep learning and related existing frameworks. Besides, operation in complex dynamic environments is regarded as a critical bottleneck for mobile robots, such as that for autonomous driving. We thus further emphasize recent achievements on how deep learning contributes to navigation and control systems for mobile robots. At the end, we discuss the open challenges and research frontiers.", "title": "" }, { "docid": "4fc4008c6762a18fef474ad251359bfa", "text": "Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving software, survey different types of self-improving software, and provide a review of the relevant literature. Finally, we address security implications from self-improving intelligent software.", "title": "" }, { "docid": "d34b81ac6c521cbf466b4b898486a201", "text": "We introduce the novel task of identifying important citations in scholarly literature, i.e., citations that indicate that the cited work is used or extended in the new effort. We believe this task is a crucial component in algorithms that detect and follow research topics and in methods that measure the quality of publications. We model this task as a supervised classification problem at two levels of detail: a coarse one with classes (important vs. non-important), and a more detailed one with four importance classes. We annotate a dataset of approximately 450 citations with this information, and release it publicly. We propose a supervised classification approach that addresses this task with a battery of features that range from citation counts to where the citation appears in the body of the paper, and show that our approach achieves a precision of 65% for a recall of 90%.", "title": "" }, { "docid": "737a7c63bab1a6688ec280d5d1abc7b5", "text": "Medicine continues to struggle in its approaches to numerous common subjective pain syndromes that lack objective signs and remain treatment resistant. Foremost among these are migraine, fibromyalgia, and irritable bowel syndrome, disorders that may overlap in their affected populations and whose sufferers have all endured the stigma of a psychosomatic label, as well as the failure of endless pharmacotherapeutic interventions with substandard benefit. The commonality in symptomatology in these conditions displaying hyperalgesia and central sensitization with possible common underlying pathophysiology suggests that a clinical endocannabinoid deficiency might characterize their origin. Its base hypothesis is that all humans have an underlying endocannabinoid tone that is a reflection of levels of the endocannabinoids, anandamide (arachidonylethanolamide), and 2-arachidonoylglycerol, their production, metabolism, and the relative abundance and state of cannabinoid receptors. Its theory is that in certain conditions, whether congenital or acquired, endocannabinoid tone becomes deficient and productive of pathophysiological syndromes.
When first proposed in 2001 and subsequently, this theory was based on genetic overlap and comorbidity, patterns of symptomatology that could be mediated by the endocannabinoid system (ECS), and the fact that exogenous cannabinoid treatment frequently provided symptomatic benefit. However, objective proof and formal clinical trial data were lacking. Currently, however, statistically significant differences in cerebrospinal fluid anandamide levels have been documented in migraineurs, and advanced imaging studies have demonstrated ECS hypofunction in post-traumatic stress disorder. Additional studies have provided a firmer foundation for the theory, while clinical data have also produced evidence for decreased pain, improved sleep, and other benefits to cannabinoid treatment and adjunctive lifestyle approaches affecting the ECS.", "title": "" } ]
scidocsrr
31ccf1dd6e5a1eb7e95939d057258805
An efficient lane detection algorithm for lane departure detection
[ { "docid": "b44df1268804e966734ea404b8c29360", "text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.", "title": "" } ]
[ { "docid": "d3049fee1ed622515f5332bcfa3bdd7b", "text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.", "title": "" }, { "docid": "64cf7bd992bc6fea358273497d962619", "text": "Magnetic skyrmions are promising candidates for next-generation information carriers, owing to their small size, topological stability, and ultralow depinning current density. A wide variety of skyrmionic device concepts and prototypes have recently been proposed, highlighting their potential applications. Furthermore, the intrinsic properties of skyrmions enable new functionalities that may be inaccessible to conventional electronic devices. Here, we report on a skyrmion-based artificial synapse device for neuromorphic systems. The synaptic weight of the proposed device can be strengthened/weakened by positive/negative stimuli, mimicking the potentiation/depression process of a biological synapse. Both short-term plasticity and long-term potentiation functionalities have been demonstrated with micromagnetic simulations. This proposal suggests new possibilities for synaptic devices in neuromorphic systems with adaptive learning function.", "title": "" }, { "docid": "8b79816cc07237489dafde316514702a", "text": "In this dataset paper we describe our work on the collection and analysis of public WhatsApp group data. Our primary goal is to explore the feasibility of collecting and using WhatsApp data for social science research. 
We therefore present a generalisable data collection methodology, and a publicly available dataset for use by other researchers. To provide context, we perform statistical exploration to allow researchers to understand what public WhatsApp group data can be collected and how this data can be used. Given the widespread use of WhatsApp, our techniques to obtain public data and potential applications are important for the community.", "title": "" }, { "docid": "fc67e1213423e599d488a1974d29bca0", "text": "The next generation communication system demands high data rate transfer, leading towards exploring a higher level of the frequency spectrum. In view of this demand, the design of substrate integrated waveguide filters is presented here in conjunction with metamaterial technology to improve performance. A metamaterial based substrate integrated waveguide filter operating in the K band (18 – 26.5 GHz) has been demonstrated in this paper with an insertion loss of −0.57 dB in the passband and a rejection band of 4.1 GHz.", "title": "" }, { "docid": "2c9f7053d9bcd6bc421b133dd7e62d08", "text": "Recurrent neural networks (RNNs) combined with an attention mechanism have proved to be useful for various NLP tasks including machine translation, sequence labeling and syntactic parsing. The attention mechanism is usually applied by estimating the weights (or importance) of inputs and taking the weighted sum of inputs as derived features. Although such features have demonstrated their effectiveness, they may fail to capture the sequence information due to the simple weighted sum being used to produce them. The order of the words does matter to the meaning or the structure of the sentences, especially for syntactic parsing, which aims to recover the structure from a sequence of words. In this study, we propose an RNN-based attention to capture the relevant and sequence-preserved features from a sentence, and use the derived features to perform the dependency parsing. We evaluated the graph-based and transition-based parsing models enhanced with the RNN-based sequence-preserved attention on both the English PTB and Chinese CTB datasets. The experimental results show that the enhanced systems were improved with a significant increase in parsing accuracy.", "title": "" }, { "docid": "79910e1dadf52be1b278d2e57d9bdb9e", "text": "Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real-time.", "title": "" }, { "docid": "8b5ca0f4b12aa5d07619078d44dbb337", "text": "Crimeware-as-a-service (CaaS) has become a prominent component of the underground economy.
CaaS provides a new dimension to cyber crime by making it more organized, automated, and accessible to criminals with limited technical skills. This paper dissects CaaS and explains the essence of the underground economy that has grown around it. The paper also describes the various crimeware services that are provided in the underground", "title": "" }, { "docid": "1c01d2d8d9a11fa71b811a5afbfc0250", "text": "This paper describes an interactive tour-guide robot, which was successfully exhibited in a Smithsonian museum. During its two weeks of operation, the robot interacted with more than 50,000 people, traversing more than 44km. Our approach specifically addresses issues such as safe navigation in unmodified and dynamic environments, and short-term human-robot interaction.", "title": "" }, { "docid": "8dee3ada764a40fce6b5676287496ccd", "text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.", "title": "" }, { "docid": "5dc25d44b0ae6ee44ee7e24832b1bc25", "text": "The present research aims to investigate students' perception levels of Edmodo and Mobile learning and to identify their real barriers at Taibah University in KSA. After implementing the Edmodo application as an M-learning platform, two scales were applied to the research sample: the first scale, consisting of 36 statements, was constructed to measure students' perceptions towards Edmodo and M-learning, and the second scale, consisting of 17 items, was constructed to determine the barriers of Edmodo and M-learning. The scales were distributed to 27 students during the second semester of the academic year 2013/2014. Findings indicated that students' perceptions of Edmodo and Mobile learning are at a “High” level in general, and the majority of students have positive perceptions towards Edmodo and Mobile learning, since they think that learning using Edmodo facilitates and increases the effectiveness of communication in learning, and they appreciate Edmodo because it saves time.
Regarding the barriers of Edmodo and Mobile learning, those facing several students seem to be within the normal range; however, students faced problems with low mobile battery and with storing large files on their mobile phones, but they did not face any difficulty entering information on the small screen size of mobile devices. Finally, it is suggested that universities add a section for M-learning to start the application of M-learning and prepare a visual and audio guide for using M-learning in teaching and learning.", "title": "" }, { "docid": "4c004745828100f6ccc6fd660ee93125", "text": "Steganography has been proposed as a new alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio steganographic technique aims at embedding data in an imperceptible, robust and secure way and then allowing it to be extracted by authorized people. Hence, to date the main challenge in digital audio steganography is to obtain robust high capacity steganographic systems. Leaning towards designing a system that ensures high capacity or robustness and security of embedded data has led to great diversity in the existing steganographic techniques. In this paper, we present the current state-of-the-art literature on digital audio steganographic techniques. We explore their potentials and limitations to ensure secure communication. A comparison and an evaluation of the reviewed techniques are also presented in this paper.", "title": "" }, { "docid": "92963d6a511d5e0a767aa34f8932fe86", "text": "A 77-GHz transmit-array on dual-layer printed circuit board (PCB) is proposed for automotive radar applications. Coplanar patch unit-cells are etched on opposite sides of the PCB and connected by through-vias. The unit-cells are arranged in concentric rings to form the transmit-array for 1-bit in-phase transmission. When combined with four substrate-integrated waveguide (SIW) slot antennas as the primary feeds, the transmit-array is able to generate four beams with a specific coverage of ±15°. The simulated and measured results of the antenna prototype at 76.5 GHz agree well, with gain greater than 18.5 dBi. The coplanar structure significantly simplifies the transmit-array design and eases the fabrication, in particular, at millimeter-wave frequencies.", "title": "" }, { "docid": "c27eecae33fe87779d3452002c1bdf8a", "text": "When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze. A wealth of information regarding intelligent decision making is conveyed by human gaze allocation; hence, exploiting such information has the potential to improve the agents' performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while playing Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that can predict human gaze positions and visual attention with high accuracy (the gaze network) and then train another network to predict human actions (the policy network). Incorporating the learned attention model from the gaze network into the policy network significantly improves the action prediction accuracy and task performance.", "title": "" }, { "docid": "fcd349147673758eedb6dba0cd7af850", "text": "We present VideoLSTM for end-to-end sequence learning of actions in video.
Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also better guides the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. And finally, we demonstrate how the attention from VideoLSTM can be exploited for action localization by relying on the action class label and temporal attention smoothing. Experiments on UCF101, HMDB51 and THUMOS13 reveal the benefit of the video-specific adaptations of VideoLSTM in isolation as well as when integrated in a combined architecture. It compares favorably against other LSTM architectures for action classification and especially action localization.", "title": "" }, { "docid": "d71040311b8753299377b02023ba5b4c", "text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "title": "" }, { "docid": "9794653cc79a0835851fdc890e908823", "text": "In 1988, Hickerson proved the celebrated “mock theta conjectures”, a collection of ten identities from Ramanujan's “lost notebook” which express certain modular forms as linear combinations of mock theta functions. In the context of Maass forms, these identities arise from the peculiar phenomenon that two different harmonic Maass forms may have the same non-holomorphic parts. Using this perspective, we construct several infinite families of modular forms which are differences of mock theta functions.", "title": "" }, { "docid": "722b045f93c8535c64cc87a47b8c8d1f", "text": "The kelp Laminaria digitata (Hudson) J.V. Lamouroux (Laminariales, Phaeophyceae) is currently cultivated on a small scale in several north Atlantic countries, with much potential for expansion. The initial stages of kelp cultivation follow one of two methods: either maximising (gametophyte method) or minimising (direct method) the vegetative growth phase prior to gametogenesis.
The gametophyte method is of increasing interest because of its utility in strain selection programmes. In spite of this, there are no studies of L. digitata gametophyte growth and reproductive capacity under commercially relevant conditions. Vegetative growth measured by length and biomass, and rate of gametogenesis, was examined in a series of experiments. A two-way fixed-effects model was used to examine the effects of both photoperiod (8:12; 12:12; 16:8, 24:0 L:D) and commonly used/commercially available growth media (f/2; Algoflash; Provasoli Enriched Seawater) on the aforementioned parameters. All media resulted in good performance of gametophytes under conditions favouring vegetative growth, while f/2 clearly resulted in better gametophyte performance and a faster rate of gametogenesis under conditions stimulating transition to fertility. Particularly, the extent of sporophyte production (% of gametophytes that produced sporophytes) at the end of the experiment showed clear differences between treatments in favour of f/2: f/2 = 30%; Algoflash = 9%; Provasoli Enriched Seawater = 2%. The effect of photoperiod was ambiguous, with evidence to suggest that the benefit of continuous illumination is less than expected. Confirmation of the photoperiodic effect is necessary, using biomass as a measure of productivity and taking greater account of effects of genotypic variability.", "title": "" }, { "docid": "0e459d7e3ffbf23c973d4843f701a727", "text": "The role of psychological flexibility in mental health stigma and psychological distress for the stigmatizer.", "title": "" }, { "docid": "42a6b6ac31383046cf11bcf16da3207e", "text": "Epigenome-wide association studies represent one means of applying genome-wide assays to identify molecular events that could be associated with human phenotypes. The epigenome is especially intriguing as a target for study, as epigenetic regulatory processes are, by definition, heritable from parent to daughter cells and are found to have transcriptional regulatory properties. As such, the epigenome is an attractive candidate for mediating long-term responses to cellular stimuli, such as environmental effects modifying disease risk. Such epigenomic studies represent a broader category of disease -omics, which suffer from multiple problems in design and execution that severely limit their interpretability. Here we define many of the problems with current epigenomic studies and propose solutions that can be applied to allow this and other disease -omics studies to achieve their potential for generating valuable insights.", "title": "" }, { "docid": "9cdddf98d24d100c752ea9d2b368bb77", "text": "Using predictive models to identify patterns that can act as biomarkers for different neuropathological conditions is becoming highly prevalent. In this paper, we consider the problem of Autism Spectrum Disorder (ASD) classification where previous work has shown that it can be beneficial to incorporate a wide variety of meta features, such as socio-cultural traits, into predictive modeling. A graph-based approach naturally suits these scenarios, where a contextual graph captures traits that characterize a population, while the specific brain activity patterns are utilized as a multivariate signal at the nodes. Graph neural networks have shown improvements in inferencing with graph-structured data.
Though the underlying graph strongly dictates the overall performance, there exists no systematic way of choosing an appropriate graph in practice, thus making predictive models non-robust. To address this, we propose a bootstrapped version of graph convolutional neural networks (G-CNNs) that utilizes an ensemble of weakly trained G-CNNs and reduces the sensitivity of the models to the choice of graph construction. We demonstrate its effectiveness on the challenging Autism Brain Imaging Data Exchange (ABIDE) dataset and show that our approach improves upon recently proposed graph-based neural networks. We also show that our method remains more robust to noisy graphs.", "title": "" } ]
scidocsrr
eec3b8577a6a4e08957132ee20df5fb2
Management accounting and integrated information systems: A literature review
[ { "docid": "97cfd37d4dc87bbd2c454d07d5ec664e", "text": "The current study examined the longitudinal impact of ERP adoption on firm performance by matching 63 firms identified by Hayes et al. [J. Inf. Syst. 15 (2001) 3] with peer firms that had not adopted ERP systems. Results indicate that return on assets (ROA), return on investment (ROI), and asset turnover (ATO) were significantly better over a 3-year period for adopters, as compared to nonadopters. Interestingly, our results are consistent with Poston and Grabski [Int. J. Account. Inf. Syst. 2 (2001) 271] who reported no preto post-adoption improvement in financial performance for ERP firms. Rather, significant differences arise in the current study because the financial performance of nonadopters decreased over time while it held steady for adopters. We also report a significant interaction between firm size and financial health for ERP adopters with respect to ROA, ROI, and return on sales (ROS). Specifically, we found a positive (negative) relationship between financial health and performance for small (large) firms. Study findings shed new light on the productivity paradox associated with ERP systems and suggest that ERP adoption helps firms gain a competitive advantage over nonadopters. D 2003 Elsevier Science Inc. All rights reserved.", "title": "" }, { "docid": "3a061755fbb1291046b95ba425dfe77e", "text": "Understanding the return on investments in information technology (IT) is the focus of a large and growing body of research. The objective of this paper is to synthesize this research and develop a model to guide future research in the evaluation of information technology investments. We focus on archival studies that use accounting or market measures of firm performance. We emphasize those studies where accounting researchers with interest in market-level analyses of systems and technology issues may hold a competitive advantage over traditional information systems (IS) researchers. We propose numerous opportunities for future research. These include examining the relation between IT and business processes, and business processes and overall firm performance, understanding the effect of contextual factors on the IT-performance relation, examining the IT-performance relation in an international context, and examining the interactive effects of IT spending and IT management on firm performance.", "title": "" } ]
[ { "docid": "c077231164a8a58f339f80b83e5b4025", "text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.", "title": "" }, { "docid": "1dac710a7c845bd3a55d8d92c18e3648", "text": "PURPOSE\nWe have conducted experiments with an innovatively designed robot endoscope holder for laparoscopic surgery that is small and low cost.\n\n\nMATERIALS AND METHODS\nA compact light endoscope robot (LER) that is placed on the patient's skin and can be used with the patient in the lateral or dorsal supine position was tested on cadavers and laboratory pigs in order to allow successive modifications. The current control system is based on voice recognition. The range of vision is 360 degrees with an angle of 160 degrees . Twenty-three procedures were performed.\n\n\nRESULTS\nThe tests made it possible to advance the prototype on a variety of aspects, including reliability, steadiness, ergonomics, and dimensions. The ease of installation of the robot, which takes only 5 minutes, and the easy handling made it possible for 21 of the 23 procedures to be performed without an assistant.\n\n\nCONCLUSION\nThe LER is a camera holder guided by the surgeon's voice that can eliminate the need for an assistant during laparoscopic surgery. The ease of installation and manufacture should make it an effective and inexpensive system for use on patients in the lateral and dorsal supine positions. Randomized clinical trials will soon validate a new version of this robot prior to marketing.", "title": "" }, { "docid": "4cb41f9de259f18cd8fe52d2f04756a6", "text": "The Effects of Lottery Prizes on Winners and their Neighbors: Evidence from the Dutch Postcode Lottery Each week, the Dutch Postcode Lottery (PCL) randomly selects a postal code, and distributes cash and a new BMW to lottery participants in that code. We study the effects of these shocks on lottery winners and their neighbors. Consistent with the life-cycle hypothesis, the effects on winners’ consumption are largely confined to cars and other durables. Consistent with the theory of in-kind transfers, the vast majority of BMW winners liquidate their BMWs. We do, however, detect substantial social effects of lottery winnings: PCL nonparticipants who live next door to winners have significantly higher levels of car consumption than other nonparticipants. 
JEL Classification: D12, C21", "title": "" }, { "docid": "1cacfd4da5273166debad8a6c1b72754", "text": "This article presents a paradigm case portrait of female romantic partners of heavy pornography users. Based on a sample of 100 personal letters, this portrait focuses on their often traumatic discovery of the pornography usage and the significance they attach to this usage for (a) their relationships, (b) their own worth and desirability, and (c) the character of their partners. Finally, we provide a number of therapeutic recommendations for helping these women to think and act more effectively in their very difficult circumstances.", "title": "" }, { "docid": "716cb240d2fcf14d3f248e02d79d9d57", "text": "OBJECTIVE\nSocial media is becoming increasingly popular as a platform for sharing personal health-related information. This information can be utilized for public health monitoring tasks, particularly for pharmacovigilance, via the use of natural language processing (NLP) techniques. However, the language in social media is highly informal, and user-expressed medical concepts are often nontechnical, descriptive, and challenging to extract. There has been limited progress in addressing these challenges, and thus far, advanced machine learning-based NLP techniques have been underutilized. Our objective is to design a machine learning-based approach to extract mentions of adverse drug reactions (ADRs) from highly informal text in social media.\n\n\nMETHODS\nWe introduce ADRMine, a machine learning-based concept extraction system that uses conditional random fields (CRFs). ADRMine utilizes a variety of features, including a novel feature for modeling words' semantic similarities. The similarities are modeled by clustering words based on unsupervised, pretrained word representation vectors (embeddings) generated from unlabeled user posts in social media using a deep learning technique.\n\n\nRESULTS\nADRMine outperforms several strong baseline systems in the ADR extraction task by achieving an F-measure of 0.82. Feature analysis demonstrates that the proposed word cluster features significantly improve extraction performance.\n\n\nCONCLUSION\nIt is possible to extract complex medical concepts, with relatively high performance, from informal, user-generated content. Our approach is particularly scalable, suitable for social media mining, as it relies on large volumes of unlabeled data, thus diminishing the need for large, annotated training data sets.", "title": "" }, { "docid": "eed511e921c130204354cafceb5b0624", "text": "Mobile technology has become increasingly common in today’s everyday life. However, mobile payment is surprisingly not among the frequently used mobile services, although technologically advanced solutions exist. Apparently, there is still a lack of acceptance of mobile payment services among consumers. The conceptual model developed and tested in this research thus focuses on factors determining consumers’ acceptance of mobile payment services. The empirical results show particularly strong support for the effects of compatibility, individual mobility, and subjective norm. Our study offers several implications for managers in regards to marketing mobile payment solutions to increase consumers’ intention to use these services. 2009 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "5487dd1976a164447c821303b53ebdf8", "text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.", "title": "" }, { "docid": "66fa9b79b1034e1fa3bf19857b5367c2", "text": "We propose a boundedly-rational model of opinion formation in which individuals are subject to persuasion bias; that is, they fail to account for possible repetition in the information they receive. We show that persuasion bias implies the phenomenon of social influence, whereby one’s influence on group opinions depends not only on accuracy, but also on how well-connected one is in the social network that determines communication. Persuasion bias also implies the phenomenon of unidimensional opinions; that is, individuals’ opinions over a multidimensional set of issues converge to a single “left-right” spectrum. We explore the implications of our model in several natural settings, including political science and marketing, and we obtain a number of novel empirical implications. DeMarzo and Zwiebel: Graduate School of Business, Stanford University, Stanford CA 94305, Vayanos: MIT Sloan School of Management, 50 Memorial Drive E52-437, Cambridge MA 02142. This paper is an extensive revision of our paper, “A Model of Persuasion – With Implication for Financial Markets,” (first draft, May 1997). We are grateful to Nick Barberis, Gary Becker, Jonathan Bendor, Larry Blume, Simon Board, Eddie Dekel, Stefano DellaVigna, Darrell Duffie, David Easley, Glenn Ellison, Simon Gervais, Ed Glaeser, Ken Judd, David Kreps, Edward Lazear, George Loewenstein, Lee Nelson, Anthony Neuberger, Matthew Rabin, José Scheinkman, Antoinette Schoar, Peter Sorenson, Pietro Veronesi, Richard Zeckhauser, three anonymous referees, and seminar participants at the American Finance Association Annual Meetings, Boston University, Cornell, Carnegie-Mellon, ESSEC, the European Summer Symposium in Financial Markets at Gerzensee, HEC, the Hoover Institution, Insead, MIT, the NBER Asset Pricing Conference, the Northwestern Theory Summer Workshop, NYU, the Stanford Institute for Theoretical Economics, Stanford, Texas A&M, UCLA, U.C. Berkeley, Université Libre de Bruxelles, University of Michigan, University of Texas at Austin, University of Tilburg, and the Utah Winter Finance Conference for helpful comments and discussions. 
All errors are our own.", "title": "" }, { "docid": "d1475e197b300489acedf8c0cbe8f182", "text": "—The publication of IEC 61850-90-1 \" Use of IEC 61850 for the communication between substations \" and the draft of IEC 61850-90-5 \" Use of IEC 61850 to transmit synchrophasor information \" opened the possibility to study IEC 61850 GOOSE Message over WAN not only in the layer 2 (link layer) but also in the layer 3 (network layer) in the OSI model. In this paper we examine different possibilities to make feasible teleprotection in the network layer over WAN sharing the communication channel with automation, management and maintenance convergence services among electrical energy substations.", "title": "" }, { "docid": "c1b34059a896564df02ef984085b93a0", "text": "Robotics has become a standard tool in outreaching to grades K-12 and attracting students to the STEM disciplines. Performing these activities in the class room usually requires substantial time commitment by the teacher and integration into the curriculum requires major effort, which makes spontaneous and short-term engagements difficult. This paper studies using “Cubelets”, a modular robotic construction kit, which requires virtually no setup time and allows substantial engagement and change of perception of STEM in as little as a 1-hour session. This paper describes the constructivist curriculum and provides qualitative and quantitative results on perception changes with respect to STEM and computer science in particular as a field of study.", "title": "" }, { "docid": "15316c80d2a880b06846e8dd398a5c3f", "text": "One weak spot is all it takes to open secured digital doors and online accounts causing untold damage and consequences.", "title": "" }, { "docid": "f8821f651731943ce1652bc8a1d2c0d6", "text": "business units and thus not even practiced in a cohesive, coherent manner. In the worst cases, busy business unit executives trade roving bands of developers like Pokémon cards in a fifth-grade classroom (in an attempt to get ahead). Suffice it to say, none of this is good. The disconnect between security and development has ultimately produced software development efforts that lack any sort of contemporary understanding of technical security risks. Today's complex and highly connected computing environments trigger myriad security concerns, so by blowing off the idea of security entirely, software builders virtually guarantee that their creations will have way too many security weaknesses that could—and should—have been avoided. This article presents some recommendations for solving this problem. Our approach is born out of experience in two diverse fields: software security and information security. Central among our recommendations is the notion of using the knowledge inherent in information security organizations to enhance secure software development efforts. Don't stand so close to me Best practices in software security include a manageable number of simple activities that should be applied throughout any software development process (see Figure 1). These lightweight activities should start at the earliest stages of software development and then continue throughout the development process and into deployment and operations. Although an increasing number of software shops and individual developers are adopting the software security touchpoints we describe here as their own, they often lack the requisite security domain knowledge required to do so. 
This critical knowledge arises from years of observing system intrusions, dealing with malicious hackers, suffering the consequences of software vulnera-bilities, and so on. Put in this position , even the best-intended development efforts can fail to take into account real-world attacks previously observed on similar application architectures. Although recent books 1,2 are starting to turn this knowledge gap around, the science of attack is a novel one. Information security staff—in particular, incident handlers and vulnerability/patch specialists— have spent years responding to attacks against real systems and thinking about the vulnerabilities that spawned them. In many cases, they've studied software vulnerabili-ties and their resulting attack profiles in minute detail. However, few information security professionals are software developers (at least, on a full-time basis), and their solution sets tend to be limited to reactive techniques such as installing software patches, shoring up firewalls, updating intrusion detection signature databases, and the like. It's very rare to find information security …", "title": "" }, { "docid": "df4b4119653789266134cf0b7571e332", "text": "Automatic detection of lymphocyte in H&E images is a necessary first step in lots of tissue image analysis algorithms. An accurate and robust automated lymphocyte detection approach is of great importance in both computer science and clinical studies. Most of the existing approaches for lymphocyte detection are based on traditional image processing algorithms and/or classic machine learning methods. In the recent years, deep learning techniques have fundamentally transformed the way that a computer interprets images and have become a matchless solution in various pattern recognition problems. In this work, we design a new deep neural network model which extends the fully convolutional network by combining the ideas in several recent techniques, such as shortcut links. Also, we design a new training scheme taking the prior knowledge about lymphocytes into consideration. The training scheme not only efficiently exploits the limited amount of free-form annotations from pathologists, but also naturally supports efficient fine-tuning. As a consequence, our model has the potential of self-improvement by leveraging the errors collected during real applications. Our experiments show that our deep neural network model achieves good performance in the images of different staining conditions or different types of tissues.", "title": "" }, { "docid": "8306854901811a5a64a2a2fe8ec554d0", "text": "OBJECTIVE\nTo summarise the benefits and harms of treatments for women with gestational diabetes mellitus.\n\n\nDESIGN\nSystematic review and meta-analysis of randomised controlled trials.\n\n\nDATA SOURCES\nEmbase, Medline, AMED, BIOSIS, CCMed, CDMS, CDSR, CENTRAL, CINAHL, DARE, HTA, NHS EED, Heclinet, SciSearch, several publishers' databases, and reference lists of relevant secondary literature up to October 2009. Review methods Included studies were randomised controlled trials of specific treatment for gestational diabetes compared with usual care or \"intensified\" compared with \"less intensified\" specific treatment.\n\n\nRESULTS\nFive randomised controlled trials matched the inclusion criteria for specific versus usual treatment. All studies used a two step approach with a 50 g glucose challenge test or screening for risk factors, or both, and a subsequent 75 g or 100 g oral glucose tolerance test. 
Meta-analyses did not show significant differences for most single end points judged to be of direct clinical importance. In women specifically treated for gestational diabetes, shoulder dystocia was significantly less common (odds ratio 0.40, 95% confidence interval 0.21 to 0.75), and one randomised controlled trial reported a significant reduction of pre-eclampsia (2.5 v 5.5%, P=0.02). For the surrogate end point of large for gestational age infants, the odds ratio was 0.48 (0.38 to 0.62). In the 13 randomised controlled trials of different intensities of specific treatments, meta-analysis showed a significant reduction of shoulder dystocia in women with more intensive treatment (0.31, 0.14 to 0.70).\n\n\nCONCLUSIONS\nTreatment for gestational diabetes, consisting of treatment to lower blood glucose concentration alone or with special obstetric care, seems to lower the risk for some perinatal complications. Decisions regarding treatment should take into account that the evidence of benefit is derived from trials for which women were selected with a two step strategy (glucose challenge test/screening for risk factors and oral glucose tolerance test).", "title": "" }, { "docid": "c18910a5fd622da55f2a2bc61703d6b8", "text": "The emergence of online social networks has revolutionized the way people seek and share information. Nowadays, popular online social sites as Twitter, Facebook and Google+ are among the major news sources as well as the most effective channels for viral marketing. However, these networks also became the most effective channel for spreading misinformation, accidentally or maliciously. The widespread diffusion of inaccurate information or fake news can lead to undesirable and severe consequences, such as widespread panic, libelous campaigns and conspiracies. In order to guarantee the trustworthiness of online social networks it is a crucial challenge to find effective strategies to contrast the spread of the misinformation in the network. In this paper we concentrate our attention on two problems related to the diffusion of misinformation in social networks: identify the misinformation sources and limit its diffusion in the network. We consider a social network where some nodes have already been infected from misinformation. We first provide an heuristics to recognize the set of most probable sources of the infection. Then, we provide an heuristics to place a few monitors in some network nodes in order to control information diffused by the suspected nodes and block misinformation they injected in the network before it reaches a large part of the network. To verify the quality and efficiency of our suggested solutions, we conduct experiments on several real-world networks. Empirical results indicate that our heuristics are among the most effective known in literature.", "title": "" }, { "docid": "ca4e3f243b2868445ecb916c081e108e", "text": "The task in the multi-agent path finding problem (MAPF) is to find paths for multiple agents, each with a different start and goal position, such that agents do not collide. It is possible to solve this problem optimally with algorithms that are based on the A* algorithm. Recently, we proposed an alternative algorithm called Conflict-Based Search (CBS) (Sharon et al. 2012), which was shown to outperform the A*-based algorithms in some cases. CBS is a two-level algorithm. At the high level, a search is performed on a tree based on conflicts between agents. At the low level, a search is performed only for a single agent at a time. 
While in some cases CBS is very efficient, in other cases it is worse than A*-based algorithms. This paper focuses on the latter case by generalizing CBS to Meta-Agent CBS (MA-CBS). The main idea is to couple groups of agents into meta-agents if the number of internal conflicts between them exceeds a given bound. MACBS acts as a framework that can run on top of any complete MAPF solver. We analyze our new approach and provide experimental results demonstrating that it outperforms basic CBS and other A*-based optimal solvers in many cases. Introduction and Background In the multi-agent path finding (MAPF) problem, we are given a graph, G(V,E), and a set of k agents labeled a1 . . . ak. Each agent ai has a start position si ∈ V and goal position gi ∈ V . At each time step an agent can either move to a neighboring location or can wait in its current location. The task is to return the least-cost set of actions for all agents that will move each of the agents to its goal without conflicting with other agents (i.e., without being in the same location at the same time or crossing the same edge simultaneously in opposite directions). MAPF has practical applications in robotics, video games, vehicle routing, and other domains (Silver 2005; Dresner & Stone 2008). In its general form, MAPF is NPcomplete, because it is a generalization of the sliding tile puzzle, which is NP-complete (Ratner & Warrnuth 1986). There are many variants to the MAPF problem. In this paper we consider the following common setting. The cumulative cost function to minimize is the sum over all agents of the number of time steps required to reach the goal location (Standley 2010; Sharon et al. 2011a). Both move Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and wait actions cost one. A centralized computing setting with a single CPU that controls all the agents is assumed. Note that a centralized computing setting is logically equivalent to a decentralized setting where each agent has its own computing power but agents are fully cooperative with full knowledge sharing and free communication. There are two main approaches for solving the MAPF in the centralized computing setting: the coupled and the decoupled approaches. In the decoupled approach, paths are planned for each agent separately. Algorithms from the decoupled approach run relatively fast, but optimality and even completeness are not always guaranteed (Silver 2005; Wang & Botea 2008; Jansen & Sturtevant 2008). New complete (but not optimal) decoupled algorithms were recently introduced for trees (Khorshid, Holte, & Sturtevant 2011) and for general graphs (Luna & Bekris 2011). Our aim is to solve the MAPF problem optimally and therefore the focus of this paper is on the coupled approach. In this approach MAPF is formalized as a global, singleagent search problem. One can activate an A*-based algorithm that searches a state space that includes all the different ways to permute the k agents into |V | locations. Consequently, the state space that is searched by the A*-based algorithms grow exponentially with the number of agents. Hence, finding the optimal solutions with A*-based algorithms requires significant computational expense. Previous optimal solvers dealt with this large search space in several ways. Ryan (2008; 2010) abstracted the problem into pre-defined structures such as cliques, halls and rings. He then modeled and solved the problem as a CSP problem. 
Note that the algorithm Ryan proposed does not necessarily returns the optimal solutions. Standley (2010; 2011) partitioned the given problem into smaller independent problems, if possible. Sharon et. al. (2011a; 2011b) suggested the increasing cost search tree (ICTS) a two-level framework where the high-level phase searches a tree with exact path costs for each of the agents and the low-level phase aims to verify whether there is a solution of this cost. In this paper we focus on the new Conflict Based Search algorithm (CBS) (Sharon et al. 2012) which optimally solves MAPF. CBS is a two-level algorithm where the highlevel search is performed on a constraint tree (CT) whose nodes include constraints on time and locations of a single agent. At each node in the constraint tree a low-level search is performed to find individual paths for all agents under the constraints given by the high-level node. Sharon et al. (2011a; 2011b; 2012) showed that the behavior of optimal MAPF algorithms can be very sensitive to characteristics of the given problem instance such as the topology and size of the graph, the number of agents, the branching factor etc. There is no universally dominant algorithm; different algorithms work well in different circumstances. In particular, experimental results have shown that CBS can significantly outperform all existing optimal MAPF algorithms on some domains (Sharon et al. 2012). However, Sharon et al. (2012) also identified cases where the CBS algorithm performs poorly. In such cases, CBS may even perform exponentially worse than A*. In this paper we aim at mitigating the worst-case performance of CBS by generalizing CBS into a new algorithm called Meta-agent CBS (MA-CBS). In MA-CBS the number of conflicts allowed at the high-level phase between any pair of agents is bounded by a predefined parameter B. When the number of conflicts exceed B, the conflicting agents are merged into a meta-agent and then treated as a joint composite agent by the low-level solver. By bounding the number of conflicts between any pair of agents, we prevent the exponential worst-case of basic CBS. This results in an new MAPF solver that significantly outperforms existing algorithms in a variety of domains. We present both theoretical and empirical support for this claim. In the low-level search, MA-CBS can use any complete MAPF solver. Thus, MA-CBS can be viewed as a solving framework and future MAPF algorithms could also be used by MA-CBS to improve its performance. Furthermore, we show that the original CBS algorithm corresponds to the extreme cases where B = ∞ (never merge agents), and the Independence Dependence (ID) framework (Standley 2010) is the other extreme case where B = 0 (always merge agents when conflicts occur). Thus, MA-CBS allows a continuum between CBS and ID, by setting different values of B between these two extremes. The Conflict Based Search Algorithm (CBS) The MA-CBS algorithm presented in this paper is based on the CBS algorithm (Sharon et al. 2012). We thus first describe the CBS algorithm in detail. Definitions for CBS We use the term path only in the context of a single agent and use the term solution to denote a set of k paths for the given set of k agents. A constraint for a given agent ai is a tuple (ai, v, t) where agent ai is prohibited from occupying vertex v at time step t.1 During the course of the algorithm, agents are associated with constraints. A consistent path for agent ai is a path that satisfies all its constraints. 
Likewise, a consistent solution is a solution that is made up from paths, such that the path for agent ai is consistent with the constraints of ai. A conflict is a tuple (ai, aj , v, t) where agent ai and agent aj occupy vertex v at time point t. A solution (of k paths) is valid if all its A conflict (as well as a constraint) may apply also to an edge when two agents traverse the same edge in opposite directions. paths have no conflicts. A consistent solution can be invalid if, despite the fact that the paths are consistent with their individual agent constraints, these paths still have conflicts. The key idea of CBS is to grow a set of constraints for each of the agents and find paths that are consistent with these constraints. If these paths have conflicts, and are thus invalid, the conflicts are resolved by adding new constraints. CBS works in two levels. At the high-level phase conflicts are found and constraints are added. At the low-level phase, the paths of the agents are updated to be consistent with the new constraints. We now describe each part of this process. High-level: Search the Constraint Tree (CT) At the high-level, CBS searches a constraint tree (CT). A CT is a binary tree. Each node N in the CT contains the following fields of data: 1. A set of constraints (N.constraints). The root of the CT contains an empty set of constraints. The child of a node in the CT inherits the constraints of the parent and adds one new constraint for one agent. 2. A solution (N.solution). A set of k paths, one path for each agent. The path for agent ai must be consistent with the constraints of ai. Such paths are found by the lowlevel search algorithm. 3. The total cost (N.cost). The cost of the current solution (summation over all the single-agent path costs). We denote this cost the f -value of the node. Node N in the CT is a goal node when N.solution is valid, i.e., the set of paths for all agents have no conflicts. The high-level phase performs a best-first search on the CT where nodes are ordered by their costs. Processing a node in the CT Given the list of constraints for a node N of the CT, the low-level search is invoked. This search returns one shortest path for each agent, ai, that is consistent with all the constraints associated with ai in node N . Once a consistent path has be", "title": "" }, { "docid": "01bfb4c4c164bcb3faf9879284d566d3", "text": "Emotions are multifaceted, but a key aspect of emotion involves the assessment of the value of environmental stimuli. This article reviews the many psychological representations, including representations of stimulus value, which are formed in the brain during Pavlovian and instrumental conditioning tasks. These representations may be related directly to the functions of cortical and subcortical neural structures. The basolateral amygdala (BLA) appears to be required for a Pavlovian conditioned stimulus (CS) to gain access to the current value of the specific unconditioned stimulus (US) that it predicts, while the central nucleus of the amygdala acts as a controller of brainstem arousal and response systems, and subserves some forms of stimulus-response Pavlovian conditioning. 
The nucleus accumbens, which appears not to be required for knowledge of the contingency between instrumental actions and their outcomes, nevertheless influences instrumental behaviour strongly by allowing Pavlovian CSs to affect the level of instrumental responding (Pavlovian-instrumental transfer), and is required for the normal ability of animals to choose rewards that are delayed. The prelimbic cortex is required for the detection of instrumental action-outcome contingencies, while insular cortex may allow rats to retrieve the values of specific foods via their sensory properties. The orbitofrontal cortex, like the BLA, may represent aspects of reinforcer value that govern instrumental choice behaviour. Finally, the anterior cingulate cortex, implicated in human disorders of emotion and attention, may have multiple roles in responding to the emotional significance of stimuli and to errors in performance, preventing responding to inappropriate stimuli.", "title": "" }, { "docid": "682f09b39cb82492c37789ff6ad66389", "text": "Aging is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. This deterioration is the primary risk factor for major human pathologies, including cancer, diabetes, cardiovascular disorders, and neurodegenerative diseases. Aging research has experienced an unprecedented advance over recent years, particularly with the discovery that the rate of aging is controlled, at least to some extent, by genetic pathways and biochemical processes conserved in evolution. This Review enumerates nine tentative hallmarks that represent common denominators of aging in different organisms, with special emphasis on mammalian aging. These hallmarks are: genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication. A major challenge is to dissect the interconnectedness between the candidate hallmarks and their relative contributions to aging, with the final goal of identifying pharmaceutical targets to improve human health during aging, with minimal side effects.", "title": "" }, { "docid": "4ff50e433ba7a5da179c7d8e5e05cb22", "text": "Social network information is now being used in ways for which it may have not been originally intended. In particular, increased use of smartphones capable ofrunning applications which access social network information enable applications to be aware of a user's location and preferences. However, current models forexchange of this information require users to compromise their privacy and security. We present several of these privacy and security issues, along withour design and implementation of solutions for these issues. Our work allows location-based services to query local mobile devices for users' social network information, without disclosing user identity or compromising users' privacy and security. We contend that it is important that such solutions be acceptedas mobile social networks continue to grow exponentially.", "title": "" } ]
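One of the passages in the array above walks through the two-level Conflict-Based Search (CBS) procedure for multi-agent path finding in some detail. As a hedged illustration of that constraint-tree idea — not code from the cited papers — the sketch below implements a stripped-down CBS on a grid: vertex conflicts only (no edge conflicts, no MA-CBS meta-agent merging), a sum-of-costs objective, and an invented horizon bound and example instance.

```python
import heapq
from itertools import count

def low_level(grid, start, goal, constraints, horizon):
    """A* over (cell, time) that avoids the vertex constraints {(cell, t), ...}."""
    rows, cols = len(grid), len(grid[0])
    goal_deadline = max((t for c, t in constraints if c == goal), default=-1)
    manhattan = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    tie = count()
    frontier = [(manhattan(start), next(tie), start, 0, (start,))]
    closed = set()
    while frontier:
        _, _, cell, t, path = heapq.heappop(frontier)
        if cell == goal and t > goal_deadline:      # safe to stay at the goal from now on
            return list(path)
        if (cell, t) in closed or t >= horizon:
            continue
        closed.add((cell, t))
        for dr, dc in ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)):  # wait + 4 moves
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:            # blocked cell
                continue
            if (nxt, t + 1) in constraints:          # would violate a constraint
                continue
            heapq.heappush(frontier, (t + 1 + manhattan(nxt), next(tie), nxt, t + 1, path + (nxt,)))
    return None

def first_vertex_conflict(paths):
    """Return (agent_i, agent_j, cell, t) for the earliest vertex conflict, or None."""
    for t in range(max(len(p) for p in paths)):
        pos = [p[min(t, len(p) - 1)] for p in paths]     # finished agents wait at their goals
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                if pos[i] == pos[j]:
                    return i, j, pos[i], t
    return None

def cbs(grid, starts, goals):
    """High level: best-first search over the constraint tree, sum-of-costs objective."""
    horizon = 2 * len(grid) * len(grid[0])               # crude bound, enough for this sketch
    k = len(starts)
    cons = [set() for _ in range(k)]
    sol = [low_level(grid, starts[i], goals[i], cons[i], horizon) for i in range(k)]
    if any(p is None for p in sol):
        return None
    tie = count()
    frontier = [(sum(len(p) - 1 for p in sol), next(tie), cons, sol)]
    while frontier:
        _, _, cons, sol = heapq.heappop(frontier)
        conflict = first_vertex_conflict(sol)
        if conflict is None:
            return sol                                   # conflict-free under this simplified model
        i, j, cell, t = conflict
        for agent in (i, j):                             # branch: forbid (cell, t) for one agent
            child = [set(c) for c in cons]
            child[agent].add((cell, t))
            path = low_level(grid, starts[agent], goals[agent], child[agent], horizon)
            if path is None:
                continue
            new_sol = list(sol)
            new_sol[agent] = path
            heapq.heappush(frontier, (sum(len(p) - 1 for p in new_sol), next(tie), child, new_sol))
    return None

if __name__ == "__main__":
    grid = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]             # 3x3 open grid; 1 would mark an obstacle
    for agent, path in enumerate(cbs(grid, starts=[(0, 1), (1, 0)], goals=[(2, 1), (1, 2)])):
        print("agent", agent, path)
```

In the full algorithm described in the passage, the high level would also branch on edge conflicts, and MA-CBS would merge a conflicting pair into a meta-agent once their conflict count exceeds the bound B; both are omitted here to keep the sketch short.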
scidocsrr
197b290f9d9260cfefeddd826a582292
Emotions from Text: Machine Learning for Text-based Emotion Prediction
[ { "docid": "8f5ca16c82dfdb7d551fdf203c9ebf7a", "text": "We analyze a few of the commonly used statistics based and machine learning algorithms for natural language disambiguation tasks and observe that they can bc recast as learning linear separators in the feature space. Each of the methods makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of all linear separators. We use this to build an argument for a data driven approach which merely searches for a good linear separator in the feature space, without further assumptions on the domain or a specific problem. We present such an approach a sparse network of linear separators, utilizing the Winnow learning aigorlthrn and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains having very large number of attributes. In particular, we present an extensive experimental comparison of our approach with other methods on several well studied lexical disambiguation tasks such as context-sensltlve spelling correction, prepositional phrase attachment and part of speech tagging. In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.", "title": "" }, { "docid": "1e464db177e96b6746f8f827c582cc31", "text": "In order to respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer. This work presents the first work on a machine learning approach to question classification. Guided by a layered semantic hierarchy of answer types, we develop a hierarchical classifier that classifies questions into fine-grained classes. This work also performs a systematic study of the use of semantic information sources in natural language classification tasks. It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy. We show accurate results on a large collection of free-form questions used in TREC 10 and 11.", "title": "" } ]
[ { "docid": "45a6e49fdea0036cd5d4e1b812346827", "text": "Watching a rubber hand being stroked, while one's own unseen hand is synchronously stroked, may cause the rubber hand to be attributed to one's own body, to \"feel like it's my hand.\" A behavioral measure of the rubber hand illusion (RHI) is a drift of the perceived position of one's own hand toward the rubber hand. The authors investigated (a) the influence of general body scheme representations on the RHI in Experiments 1 and 2 and (b) the necessary conditions of visuotactile stimulation underlying the RHI in Experiments 3 and 4. Overall, the results suggest that at the level of the process underlying the build up of the RHI, bottom-up processes of visuotactile correlation drive the illusion as a necessary, but not sufficient, condition. Conversely, at the level of the phenomenological content, the illusion is modulated by top-down influences originating from the representation of one's own body.", "title": "" }, { "docid": "37dc4a306f043684042e6af01223a275", "text": "In recent years, studies about control methods for complex machines and robots have been developed rapidly. Biped robots are often treated as inverted pendulums for its simple structure. But modeling of robot and other complex machines is a time-consuming procedure. A new method of modeling and simulation of robot based on SimMechanics is proposed in this paper. Physical modeling, parameter setting and simulation are presented in detail. The SimMechanics block model is first used in modeling and simulation of inverted pendulums. Simulation results of the SimMechanics block model and mathematical model for single inverted pendulum are compared. Furthermore, a full state feedback controller is designed to satisfy the performance requirement. It indicates that SimMechanics can be used for unstable nonlinear system and robots.", "title": "" }, { "docid": "c2277b2502f5f64c7c7c7c03f992187c", "text": "Purpose – To provide useful references for manufacturing industry which guide the linkage of business strategies and performance indicators for information security projects. Design/methodology/approach – This study uses balanced scorecard (BSC) framework to set up performance index for information security management in organizations. Moreover, BSC used is to strengthen the linkage between foundational performance indicators and progressive business strategy theme. Findings – The general model of information security management builds the strategy map with 12 strategy themes and 35 key performance indicators are established. The development of strategy map also express how to link strategy themes to key performance indicators. Research limitations/implications – The investigation of listed manufacturing companies in Taiwan may limit the application elsewhere. Practical implications – Traditional performance measurement system like return on investment, sales growth is not enough to describe and manage intangible assets. This study based on BSC to measure information security management performance can provide the increasing value from improving measures and management insight in modern business. Originality/value – This study combines the information security researches and organizational performance studies. 
The result helps organizations to assess values of information security projects and consider how to link projects performance to business strategies.", "title": "" }, { "docid": "f25a5e20c5e92e9a77d708424b05f69d", "text": "Prompt and widely available diagnostics of breast cancer is crucial for the prognosis of patients. One of the diagnostic methods is the analysis of cytological material from the breast. This examination requires extensive knowledge and experience of the cytologist. Computer-aided diagnosis can speed up the diagnostic process and allow for large-scale screening. One of the largest challenges in the automatic analysis of cytological images is the segmentation of nuclei. In this study, four different clustering algorithms are tested and compared in the task of fast nuclei segmentation. K-means, fuzzy C-means, competitive learning neural networks and Gaussian mixture models were incorporated for clustering in the color space along with adaptive thresholding in grayscale. These methods were applied in a medical decision support system for breast cancer diagnosis, where the cases were classified as either benign or malignant. In the segmented nuclei, 42 morphological, topological and texture features were extracted. Then, these features were used in a classification procedure with three different classifiers. The system was tested for classification accuracy by means of microscopic images of fine needle breast biopsies. In cooperation with the Regional Hospital in Zielona Góra, 500 real case medical images from 50 patients were collected. The acquired classification accuracy was approximately 96-100%, which is very promising and shows that the presented method ensures accurate and objective data acquisition that could be used to facilitate breast cancer diagnosis.", "title": "" }, { "docid": "a13114518e3e2303e15bf079508d26aa", "text": "Machine learning algorithms are optimized to model statistical properties of the training data. If the input data reflects stereotypes and biases of the broader society, then the output of the learning algorithm also captures these stereotypes. In this paper, we initiate the study of gender stereotypes in word embedding, a popular framework to represent text data. As their use becomes increasingly common, applications can inadvertently amplify unwanted stereotypes. We show across multiple datasets that the embeddings contain significant gender stereotypes, especially with regard to professions. We created a novel gender analogy task and combined it with crowdsourcing to systematically quantify the gender bias in a given embedding. We developed an efficient algorithm that reduces gender stereotype using just a handful of training examples while preserving the useful geometric properties of the embedding. We evaluated our algorithm on several metrics. While we focus on male/female stereotypes, our framework may be applicable to other types of embedding biases.", "title": "" }, { "docid": "50dc3186ad603ef09be8cca350ff4d77", "text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. 
System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.", "title": "" }, { "docid": "1567d68667809bc4bfe642b3b547e1eb", "text": "Chitosan (CS) and sodium alginate (SA) are two widely popular biopolymers which are used for biomedical and pharmaceutical applications from many years. The objective of present study was to study the effect of biofield treatment on physical, chemical and thermal properties of CS and SA. The study was performed in two groups (control and treated). The control group remained as untreated, and biofield treatment was given to treated group. The control and treated polymers were characterized by Fourier transform infrared (FT-IR) spectroscopy, CHNSO analysis, X-ray diffraction (XRD), particle size analysis, differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). FT-IR of treated chitosan showed increase in frequency of –CH stretching (2925→2979 cm1) vibrations with respect to control. However, the treated SA showed increase in frequency of –OH stretching (3182→3284 cm-1) which may be correlated to increase in force constant or bond strength with respect to control. CHNSO results showed significant increase in percentage of oxygen and hydrogen of treated polymers (CS and SA) with respect to control. XRD studies revealed that crystallinity was improved in treated CS as compared to control. The percentage crystallite size was increased significantly by 69.59% in treated CS with respect to control. However, treated SA showed decrease in crystallite size by 41.04% as compared to control sample. The treated SA showed significant reduction in particle size (d50 and d99) with respect to control SA. DSC study showed changes in decomposition temperature in treated CS with respect to control. A significant change in enthalpy was observed in treated polymers (CS and CA) with respect to control. TGA results of treated CS showed decrease in Tmax with respect to control. Likewise, the treated SA also showed decrease in Tmax which could be correlated to reduction in thermal stability after biofield treatment. Overall, the results showed that biofield treatment has significantly changed the physical, chemical and thermal properties of CS and SA. Characterization of Physicochemical and Thermal Properties of Chitosan and Sodium Alginate after Biofield Treatment", "title": "" }, { "docid": "4ec947c0420e47decd6de65330baf820", "text": "Detailed exploration on Brain Computer Interface (BCI) and its recent trends has been done in this paper. Work is being done to identify objects, images, videos and their color compositions. Efforts are on the way in understanding speech, words, emotions, feelings and moods. When humans watch the surrounding environment, visual data is processed by the brain, and it is possible to reconstruct the same on the screen with some appreciable accuracy by analyzing the physiological data. This data is acquired by using one of the non-invasive techniques like electroencephalography (EEG) in BCI. The acquired signal is to be translated to produce the image on to the screen. This paper also lays suitable directions for future work. 
KeywordsBCI; EEG; brain image reconstruction.", "title": "" }, { "docid": "d6f1278ccb6de695200411137b85b89a", "text": "The complexity of information systems is increasing in recent years, leading to increased effort for maintenance and configuration. Self-adaptive systems (SASs) address this issue. Due to new computing trends, such as pervasive computing, miniaturization of IT leads to mobile devices with the emerging need for context adaptation. Therefore, it is beneficial that devices are able to adapt context. Hence, we propose to extend the definition of SASs and include context adaptation. This paper presents a taxonomy of self-adaptation and a survey on engineering SASs. Based on the taxonomy and the survey, we motivate a new perspective on SAS including context adaptation.", "title": "" }, { "docid": "6ccfe86f2a07dc01f87907855f6cb337", "text": "H istorically, retention of distance learners has been problematic with dropout rates disproportionably high compared to traditional course settings (Richards & Ridley, 1997; Wetzel, Radtke, & Stern, 1994). Dropout rates of 30 to 50% have been common (Moore & Kearsley, 1996). Students may experience feelings of isolation in distance courses compared to prior faceto-face educational experiences (Shaw & Polovina, 1999). If the distance courses feature limited contact with instructors and fellow students, the result of this isolation can be unfinished courses or degrees (Keegan, 1990). Student satisfaction in traditional learning environments has been overlooked in the past (Astin, 1993; DeBourgh, 1999; Navarro & Shoemaker, 2000). Student satisfaction has also not been given the proper attention in distance learning environments (Biner, Dean, & Mellinger, 1994). Richards and Ridley (1997) suggested further research is necessary to study factors affecting student enrollment and satisfaction. Prior studies in classroom-based courses have shown there is a high correlation between student satisfaction and retention (Astin, 1993; Edwards & Waters, 1982). This high correlation has also been found in studies in which distance learners were the target population (Bailey, Bauman, & Lata, 1998). The purpose of this study was to identify factors influencing student satisfaction in online courses, and to create and validate an instrument to measure student satisfaction in online courses.", "title": "" }, { "docid": "22b52198123909ff7b9a7d296eb88f7e", "text": "This paper addresses the problem of outdoor terrain modeling for the purposes of mobile robot navigation. We propose an approach in which a robot acquires a set of terrain models at differing resolutions. Our approach addresses one of the major shortcomings of Bayesian reasoning when applied to terrain modeling, namely artifacts that arise from the limited spatial resolution of robot perception. Limited spatial resolution causes small obstacles to be detectable only at close range. Hence, a Bayes filter estimating the state of terrain segments must consider the ranges at which that terrain is observed. We develop a multi-resolution approach that maintains multiple navigation maps, and derive rational arguments for the number of layers and their resolutions. 
We show that our approach yields significantly better results in a practical robot system, capable of acquiring detailed 3-D maps in large-scale outdoor environments.", "title": "" }, { "docid": "f670178ac943bbcc17978a0091159c7f", "text": "In this article, we present the first academic comparable corpus involving written French and French Sign Language. After explaining our initial motivation to build a parallel set of such data, especially in the context of our work on Sign Language modelling and our prospect of machine translation into Sign Language, we present the main problems posed when mixing language channels and modalities (oral, written, signed), discussing the translation-vs-interpretation narrative in particular. We describe the process followed to guarantee feature coverage and exploitable results despite a serious cost limitation, the data being collected from professional translations. We conclude with a few uses and prospects of the corpus.", "title": "" }, { "docid": "3f8e04d598c7c51779f6a2ff5c999c83", "text": "Class hierarchies are commonly used to reduce the complexity of the classification problem. This is crucial when dealing with a large number of categories. In this work, we evaluate class hierarchies currently constructed for visual recognition. We show that top-down as well as bottom-up approaches, which are commonly used to automatically construct hierarchies, incorporate assumptions about the separability of classes. Those assumptions do not hold for visual recognition of a large number of object categories. We therefore propose a modification which is appropriate for most top-down approaches. It allows to construct class hierarchies that postpone decisions in the presence of uncertainty and thus provide higher recognition accuracy. We also compare our method to a one-against-all approach and show how to control the speed-foraccuracy trade-off with our method. For the experimental evaluation, we use the Caltech-256 visual object classes dataset and compare to stateof-the-art methods.", "title": "" }, { "docid": "424ee79c6894196538f434ab18351346", "text": "Modern societies rely on efficient transportation systems for sustainable mobility. In this paper, we perform a large-scale and empirical evaluation of a dynamic and distributed taxi-sharing system. The novel system takes advantage of nowadays widespread availability of communication and computation to convey a cost-efficient, door-to-door and flexible system, offering a quality of service similar to traditional taxis. The shared taxi service is assessed in a real-city scenario using a highly realistic simulation platform. Simulation results have shown the system's advantages for both passengers and taxi drivers, and that trade-offs need to be considered. Compared with the current taxi operation model, results show a increase of 48% on the average occupancy per traveled kilometer with a full deployment of the taxi-sharing system.", "title": "" }, { "docid": "db8ed2e606bfbe6c48734c2fe6c57316", "text": "The ideas of frequency and predictability have played a fundamental role in models of human language processing for well over a hundred years (Schuchardt, 1885; Jespersen, 1922; Zipf, 1929; Martinet, 1960; Oldfield & Wingfield, 1965; Fidelholz, 1975; Jescheniak & Levelt, 1994; Bybee, 1996). 
While most psycholinguistic models have thus long included word frequency as a component, recent models have proposed more generally that probabilistic information about words, phrases, and other linguistic structure is represented in the minds of language users and plays a role in language comprehension (Jurafsky, 1996; MacDonald, 1993; McRae, Spivey-Knowlton, & Tanenhaus, 1998; Narayanan & Jurafsky, 1998; Trueswell & Tanenhaus, 1994) production (Gregory, Raymond, Bell, Fosler-Lussier, & Jurafsky, 1999; Roland & Jurafsky, 2000) and learning (Brent & Cartwright, 1996; Landauer & Dumais, 1997; Saffran, Aslin, & Newport, 1996; Seidenberg & MacDonald, 1999). In recent papers (Bell, Jurafsky, Fosler-Lussier, Girand, & Gildea, 1999; Gregory et al., 1999; Jurafsky, Bell, Fosler-Lussier, Girand, & Raymond, 1998), we have been studying the role of predictability and frequency in lexical production. Our goal is to understand the many factors that affect production variability as reflected in reduction processes such as vowel reduction, durational shortening, or final segmental deletion of words in spontaneous speech. One proposal that has resulted from this work is the Probabilistic Reduction Hypothesis: word forms are reduced when they have a higher probability. The probability of a word is conditioned on many aspects of its context, including neighboring words, syntactic and lexical structure, semantic expectations, and discourse factors. This proposal thus generalizes over earlier models which refer only to word frequency (Zipf, 1929; Fidelholz, 1975; Rhodes, 1992, 1996) or predictability (Fowler & Housum, 1987). In this paper we focus on a particular domain of probabilistic linguistic knowledge in lexical production: the role of local probabilistic relations between words.", "title": "" }, { "docid": "e5481c18acb0ccbf8cefb55da1b2a60a", "text": "Temporal database is a database which captures and maintains past, present and future data. Conventional databases are not suitable for handling such time varying data. In this context temporal database has gained a significant importance in the field of databases and data mining. The major objective of this research is to perform a detailed survey on temporal databases and the various temporal data mining techniques and explore the various research issues in temporal data mining. We also throw light on the temporal association rules and temporal clustering works carried in literature.", "title": "" }, { "docid": "2c75f9e3dbd8e38fc821e344679d72f1", "text": "This paper develops an MILP model, named Satisfactory-Green Vehicle Routing Problem. It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model in addition to the traditional objective of the VRP, both the pollution and customers’ satisfaction have been taken into account. Meanwhile, the introduced model prepares an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, speed and idle time of vehicles. Additionally, some new factors evaluate the greening of each decision based on three criteria. This model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval for incorporating customers’ satisfaction into other linear objectives. We have presented a mixed integer linear programming formulation for the S-GVRP. This model enriches managerial insights by providing trade-offs between customers’ satisfaction, total costs and emission levels. 
Finally, we have provided a numerical study for showing the applicability of the model.", "title": "" }, { "docid": "553e476ad6a0081aed01775f995f4d16", "text": "This document describes the findings of the Second Workshop on Neural Machine Translation and Generation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2018). First, we summarize the research trends of papers presented in the proceedings, and note that there is particular interest in linguistic structure, domain adaptation, data augmentation, handling inadequate resources, and analysis of models. Second, we describe the results of the workshop’s shared task on efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient.", "title": "" }, { "docid": "8e071cfeaf33444e9f85f6bfcb8fa51b", "text": "BACKGROUND\nLutein is a carotenoid that may play a role in eye health. Human milk typically contains higher concentrations of lutein than infant formula. Preliminary data suggest there are differences in serum lutein concentrations between breastfed and formula-fed infants.\n\n\nAIM OF THE STUDY\nTo measure the serum lutein concentrations among infants fed human milk or formulas with and without added lutein.\n\n\nMETHODS\nA prospective, double-masked trial was conducted in healthy term formula-fed infants (n = 26) randomized between 9 and 16 days of age to study formulas containing 20 (unfortified), 45, 120, and 225 mcg/l of lutein. A breastfed reference group was studied (n = 14) and milk samples were collected from their mothers. Primary outcome was serum lutein concentration at week 12.\n\n\nRESULTS\nGeometric mean lutein concentration of human milk was 21.1 mcg/l (95% CI 14.9-30.0). At week 12, the human milk group had a sixfold higher geometric mean serum lutein (69.3 mcg/l; 95% CI 40.3-119) than the unfortified formula group (11.3 mcg/l; 95% CI 8.1-15.8). Mean serum lutein increased from baseline in each formula group except the unfortified group. Linear regression equation indicated breastfed infants had a greater increase in serum lutein (slope 3.7; P < 0.001) per unit increase in milk lutein than formula-fed infants (slope 0.9; P < 0.001).\n\n\nCONCLUSIONS\nBreastfed infants have higher mean serum lutein concentrations than infants who consume formula unfortified with lutein. These data suggest approximately 4 times more lutein is needed in infant formula than in human milk to achieve similar serum lutein concentrations among breastfed and formula fed infants.", "title": "" }, { "docid": "baed3d522bfd5d56401bfac48e8c51a2", "text": "Mobile malware attempts to evade detection during app analysis by mimicking security-sensitive behaviors of benign apps that provide similar functionality (e.g., sending SMS messages), and suppressing their payload to reduce the chance of being observed (e.g., executing only its payload at night). Since current approaches focus their analyses on the types of security-sensitive resources being accessed (e.g., network), these evasive techniques in malware make differentiating between malicious and benign app behaviors a difficult task during app analysis. We propose that the malicious and benign behaviors within apps can be differentiated based on the contexts that trigger security-sensitive behaviors, i.e., the events and conditions that cause the security-sensitive behaviors to occur. 
In this work, we introduce AppContext, an approach of static program analysis that extracts the contexts of security-sensitive behaviors to assist app analysis in differentiating between malicious and benign behaviors. We implement a prototype of AppContext and evaluate AppContext on 202 malicious apps from various malware datasets, and 633 benign apps from the Google Play Store. AppContext correctly identifies 192 malicious apps with 87.7% precision and 95% recall. Our evaluation results suggest that the maliciousness of a security-sensitive behavior is more closely related to the intention of the behavior (reflected via contexts) than the type of the security-sensitive resources that the behavior accesses.", "title": "" } ]
scidocsrr
940cd05eb09f3aa85e0a63e79bcb338c
Proactive Coping and its Relation to the Five-Factor Model of Personality
[ { "docid": "281bcb92dfaae0dc541ef0b7b8db2d72", "text": "In 3 studies, the authors investigated the functional role of psychological resilience and positive emotions in the stress process. Studies 1a and 1b explored naturally occurring daily stressors. Study 2 examined data from a sample of recently bereaved widows. Across studies, multilevel random coefficient modeling analyses revealed that the occurrence of daily positive emotions serves to moderate stress reactivity and mediate stress recovery. Findings also indicated that differences in psychological resilience accounted for meaningful variation in daily emotional responses to stress. Higher levels of trait resilience predicted a weaker association between positive and negative emotions, particularly on days characterized by heightened stress. Finally, findings indicated that over time, the experience of positive emotions functions to assist high-resilient individuals in their ability to recover effectively from daily stress. Implications for research into protective factors that serve to inhibit the scope, severity, and diffusion of daily stressors in later adulthood are discussed.", "title": "" }, { "docid": "6c29473469f392079fa8406419190116", "text": "The five-factor model of personality is a hierarchical organization of personality traits in terms of five basic dimensions: Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience. Research using both natural language adjectives and theoretically based personality questionnaires supports the comprehensiveness of the model and its applicability across observers and cultures. This article summarizes the history of the model and its supporting evidence; discusses conceptions of the nature of the factors; and outlines an agenda for theorizing about the origins and operation of the factors. We argue that the model should prove useful both for individual assessment and for the elucidation of a number of topics of interest to personality psychologists.", "title": "" } ]
[ { "docid": "0d81a7af3c94e054841e12d4364b448c", "text": "Internet of Things (IoT) is characterized by heterogeneous technologies, which concur to the provisioning of innovative services in various application domains. In this scenario, the satisfaction of security and privacy requirements plays a fundamental role. Such requirements include data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Traditional security countermeasures cannot be directly applied to IoT technologies due to the different standards and communication stacks involved. Moreover, the high number of interconnected devices arises scalability issues; therefore a flexible infrastructure is needed able to deal with security threats in such a dynamic environment. In this survey we present the main research challenges and the existing solutions in the field of IoT security, identifying open issues, and suggesting some hints for future research. During the last decade, Internet of Things (IoT) approached our lives silently and gradually, thanks to the availability of wireless communication systems (e.g., RFID, WiFi, 4G, IEEE 802.15.x), which have been increasingly employed as technology driver for crucial smart monitoring and control applications [1–3]. Nowadays, the concept of IoT is many-folded, it embraces many different technologies, services, and standards and it is widely perceived as the angular stone of the ICT market in the next ten years, at least [4–6]. From a logical viewpoint, an IoT system can be depicted as a collection of smart devices that interact on a collabo-rative basis to fulfill a common goal. At the technological floor, IoT deployments may adopt different processing and communication architectures, technologies, and design methodologies, based on their target. For instance, the same IoT system could leverage the capabilities of a wireless sensor network (WSN) that collects the environmental information in a given area and a set of smartphones on top of which monitoring applications run. In the middle, a standardized or proprietary middle-ware could be employed to ease the access to virtualized resources and services. The middleware, in turn, might be implemented using cloud technologies, centralized overlays , or peer to peer systems [7]. Of course, this high level of heterogeneity, coupled to the wide scale of IoT systems, is expected to magnify security threats of the current Internet, which is being increasingly used to let interact humans, machines, and robots, in any combination. More in details, traditional security countermeasures and privacy enforcement cannot be directly applied to IoT technologies due to …", "title": "" }, { "docid": "cc379f31d87bce8ec46829f227458059", "text": "In this paper we exemplify how information visualization supports speculative thinking, hypotheses testing, and preliminary interpretation processes as part of literary research. While InfoVis has become a buzz topic in the digital humanities, skepticism remains about how effectively it integrates into and expands on traditional humanities research approaches. From an InfoVis perspective, we lack case studies that show the specific design challenges that make literary studies and humanities research at large a unique application area for information visualization. 
We examine these questions through our case study of the Speculative W@nderverse, a visualization tool that was designed to enable the analysis and exploration of an untapped literary collection consisting of thousands of science fiction short stories. We present the results of two empirical studies that involved general-interest readers and literary scholars who used the evolving visualization prototype as part of their research for over a year. Our findings suggest a design space for visualizing literary collections that is defined by (1) their academic and public relevance, (2) the tension between qualitative vs. quantitative methods of interpretation, (3) result-vs. process-driven approaches to InfoVis, and (4) the unique material and visual qualities of cultural collections. Through the Speculative W@nderverse we demonstrate how visualization can bridge these sometimes contradictory perspectives by cultivating curiosity and providing entry points into literary collections while, at the same time, supporting multiple aspects of humanities research processes.", "title": "" }, { "docid": "8c0f20061bd09b328748d256d5ece7cc", "text": "Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.", "title": "" }, { "docid": "357e03d12dc50cf5ce27cadd50ac99fa", "text": "This paper presents a linear solution for reconstructing the 3D trajectory of a moving point from its correspondence in a collection of 2D perspective images, given the 3D spatial pose and time of capture of the cameras that produced each image. Triangulation-based solutions do not apply, as multiple views of the point may not exist at each instant in time. A geometric analysis of the problem is presented and a criterion, called reconstructibility, is defined to precisely characterize the cases when reconstruction is possible, and how accurate it can be. We apply the linear reconstruction algorithm to reconstruct the time evolving 3D structure of several real-world scenes, given a collection of non-coincidental 2D images.", "title": "" }, { "docid": "b5beb47957acfaa6ab44a5a65b729793", "text": "In developing technology for indoor localization, we have recently begun exploring commercially available state of the art localization technologies. 
The DecaWave DW1000 is a new ultra-wideband transceiver that advertises high-precision indoor pairwise ranging between modules with errors as low as 10 cm. We are currently exploring this technology to automate obtaining anchor ground-truth locations for other indoor localization systems. Anchor positioning is a constrained version of indoor localization, with minimal time constraints and static devices. However, as we intend to include the DW1000 hardware on our own localization system, this provides an opportunity for gathering performance data for a commercially-enabled localization system deployed by a third party for comparison purposes. We do not claim the ranging hardware as our original work, but we do provide a hardware implementation, an infrastructure for converting pairwise measurements to locations, and the front-end for viewing the results.", "title": "" }, { "docid": "01ea3bf8f7694f76b486265edbdeb834", "text": "We deepen and extend resource-level theorizing about sustainable competitive advantage by developing a formal model of resource development in competitive markets. Our model incorporates three important barriers to imitation: time compression diseconomies, causal ambiguity and the magnitude of fixed investments. Time compression diseconomies are derived from a micro-model of resource development with diminishing returns to effort. We characterize two dimensions of sustainability: whether a resource is imitable and how long imitation takes. We identify conditions under which competitive advantage does not lead to superior performance and show that an imitator can sometimes benefit from increases in causal ambiguity. Despite recent criticisms, we reaffirm the usefulness of a resource-level of analysis, especially when the focus is on resources developed through internal projects with identifiable stopping times.", "title": "" }, { "docid": "74808d33cffabf89e7f6c4f97565f486", "text": "Multimedia data security is becoming important with the continuous increase of digital communication on the internet. Without privacy of data there is no point in communicating using extremely high end technologies. Data encryption is a suitable method to protect data, whereas steganography is the process of hiding secret information inside some carrier. This paper focuses on the utilization of digital video/images as a cover to hide data, and for added security, encryption is combined with steganography. In the proposed method, the message image is encrypted with ECC and the encrypted image is hidden using LSB within the cover video. It gives a high level of authentication, security and resistance against extraction by an attacker. As ECC offers better security with smaller key sizes, it results in faster computation, lower power consumption, and memory and bandwidth savings.", "title": "" }, { "docid": "08804b3859d70c6212bba05c7e792f9a", "text": "Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a \"Bayesian sparse linear mixed model\" (BSLMM) that includes both these models as special cases.
We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html.", "title": "" }, { "docid": "b4ed57258b85ab4d81d5071fc7ad2cc9", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "455bad2a024c2e15a1aec6b8472e2ef4", "text": "In this contribution we present a probabilistic fusion framework for implementing a sensor independent measurement fusion. All interfaces are using probabilistic descriptions of measurement and existence uncertainties. We introduce several extensions to already existing algorithms: the support for association of multiple measurements to the same object is introduced, which reduces the effects of split segments in the data preprocessing step of high-resolution sensors like laser scanners. Furthermore, we present an approach for integrating explicit object birth models. We also developed extensions to speed up the algorithm which lead to real-time performance with fragmented data. We show the application of the framework in an automotive multi-target multi-sensor environment by fusing laser scanner and video. The algorithms were evaluated using real-world data in our research vehicle.", "title": "" }, { "docid": "d0ffe432e19d9039a95aed4146b55b61", "text": "While dynamic malware analysis methods generally provide better precision than purely static methods, they have the key drawback that they can only detect malicious behavior if it is executed during analysis. This requires inputs that trigger the malicious behavior to be applied during execution. All current methods, such as hard-coded tests, random fuzzing and concolic testing, can provide good coverage but are inefficient because they are unaware of the specific capabilities of the dynamic analysis tool. 
In this work, we introduce IntelliDroid, a generic Android input generator that can be configured to produce inputs specific to a dynamic analysis tool, for the analysis of any Android application. Furthermore, IntelliDroid is capable of determining the precise order that the inputs must be injected, and injects them at what we call the device-framework interface such that system fidelity is preserved. This enables it to be paired with full-system dynamic analysis tools such as TaintDroid. Our experiments demonstrate that IntelliDroid requires an average of 72 inputs and only needs to execute an average of 5% of the application to detect malicious behavior. When evaluated on 75 instances of malicious behavior, IntelliDroid successfully identifies the behavior, extracts path constraints, and executes the malicious code in all but 5 cases. On average, IntelliDroid performs these tasks in 138.4 seconds per application.", "title": "" }, { "docid": "8ccd1dfb75523c296508453b5a557384", "text": "It has long been considered a significant problem to improve the visual quality of lossy image and video compression. Recent advances in computing power together with the availability of large training data sets has increased interest in the application of deep learning CNNs to address image recognition and image processing tasks. Here, we present a powerful CNN tailored to the specific task of semantic image understanding to achieve higher visual quality in lossy compression. A modest increase in complexity is incorporated to the encoder which allows a standard, off-the-shelf JPEG decoder to be used. While JPEG encoding may be optimized for generic images, the process is ultimately unaware of the specific content of the image to be compressed. Our technique makes JPEG content-aware by designing and training a model to identify multiple semantic regions in a given image. Unlike object detection techniques, our model does not require labeling of object positions and is able to identify objects in a single pass. We present a new CNN architecture directed specifically to image compression, which generates a map that highlights semantically-salient regions so that they can be encoded at higher quality as compared to background regions. By adding a complete set of features for every class, and then taking a threshold over the sum of all feature activations, we generate a map that highlights semantically-salient regions so that they can be encoded at a better quality compared to background regions. Experiments are presented on the Kodak PhotoCD dataset and the MIT Saliency Benchmark dataset, in which our algorithm achieves higher visual quality for the same compressed size while preserving PSNR.", "title": "" }, { "docid": "ec2eb33d3bf01df406409a31cc0a0e1f", "text": "Brain graphs provide a relatively simple and increasingly popular way of modeling the human brain connectome, using graph theory to abstractly define a nervous system as a set of nodes (denoting anatomical regions or recording electrodes) and interconnecting edges (denoting structural or functional connections). Topological and geometrical properties of these graphs can be measured and compared to random graphs and to graphs derived from other neuroscience data or other (nonneural) complex systems. Both structural and functional human brain graphs have consistently demonstrated key topological properties such as small-worldness, modularity, and heterogeneous degree distributions.
Brain graphs are also physically embedded so as to nearly minimize wiring cost, a key geometric property. Here we offer a conceptual review and methodological guide to graphical analysis of human neuroimaging data, with an emphasis on some of the key assumptions, issues, and trade-offs facing the investigator.", "title": "" }, { "docid": "4b9df4116960cd3e3300d87e4f97e1e9", "text": "Large data collections required for the training of neural networks often contain sensitive information such as the medical histories of patients, and the privacy of the training data must be preserved. In this paper, we introduce a dropout technique that provides an elegant Bayesian interpretation to dropout, and show that the intrinsic noise added, with the primary goal of regularization, can be exploited to obtain a degree of differential privacy. The iterative nature of training neural networks presents a challenge for privacy-preserving estimation since multiple iterations increase the amount of noise added. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter estimates on the overall privacy loss. We demonstrate the accuracy of our privacy-preserving dropout algorithm on benchmark datasets.", "title": "" }, { "docid": "bc11f3de3037b0098a6c313d879ae696", "text": "The study of polygon meshes is a large sub-field of computer graphics and geometric modeling. Different representations of polygon meshes are used for different applications and goals. The variety of operations performed on meshes may include boolean logic, smoothing, simplification, and many others. 2.3.1 What is a mesh? A mesh is a collection of polygonal facets targeting to constitute an appropriate approximation of a real 3D object. It possesses three different combinatorial elements: vertices, edges and facets. From another viewpoint, a mesh can also be completely described by two kinds of information. The geometry information gives essentially the positions (coordinates) of all its vertices, while the connectivity information provides the adjacency relations between the different elements. 2.3.2 An example of 3D meshes As we can see in the Fig. 2.3, the facets usually consist of triangles, quadrilaterals or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes. The degree of a facet is the number of its component edges, and the valence of a vertex is defined as the number of its incident edges. 2.3.3 Classification of structures Polygon meshes may be represented in a variety of structures, using different methods to store the vertex, edge and face data. In general they include/", "title": "" }, { "docid": "befbfb5b083cddb7fb43ebaa8df244c1", "text": "The aim of this study was to adapt and validate the Spanish version of the Sport Motivation Scale-II (S-SMS-II) in adolescent athletes. The sample included 766 Spanish adolescents (263 females and 503 males; average age = 13.71 ± 1.30 years old). The methodological steps established by the International Test Commission were followed. Four measurement models were compared employing the maximum likelihood estimation (with six, five, three, and two factors). Then, factorial invariance analyses were conducted and the effect sizes were calculated. Finally, the reliability was calculated using Cronbach's alpha, omega, and average variance extracted coefficients. 
The five-factor S-SMS-II showed the best indices of fit (Cronbach's alpha .64 to .74; goodness of fit index .971, root mean square error of approximation .044, comparative fit index .966). Factorial invariance was also verified across gender and between sport-federated athletes and non-federated athletes. The proposed S-SMS-II is discussed according to previous validated versions (English, Portuguese, and Chinese).", "title": "" }, { "docid": "7f067f869481f06e865880e1d529adc8", "text": "Distributed Denial of Service (DDoS) is defined as an attack in which multiple compromised systems are made to attack a single target to make the services unavailable for legitimate users. It is an attack designed to render a computer or network incapable of providing normal services. A DDoS attack uses many compromised intermediate systems, known as botnets, which are remotely controlled by an attacker to launch these attacks. A DDoS attack basically results in a situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is posing an immense threat to the entire internet world today. Any compromise to computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing, etc., in a collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand the behaviour of DDoS attacks because they affect the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDoS attacks is a critical need for cyberspace. Our rigorous survey study presented in this paper describes a platform for the study of the evolution of DDoS attacks and their defense mechanisms.", "title": "" }, { "docid": "093b6b75b34799a1920e27ef8f02595d", "text": "Logistic Regression is a well-known classification method that has been used widely in many applications of data mining, machine learning, computer vision, and bioinformatics. Sparse logistic regression embeds feature selection in the classification framework using the l1-norm regularization, and is attractive in many applications involving high-dimensional data. In this paper, we propose Lassplore for solving large-scale sparse logistic regression. Specifically, we formulate the problem as l1-ball constrained smooth convex optimization, and propose to solve the problem using Nesterov's method, an optimal first-order black-box method for smooth convex optimization. One of the critical issues in the use of Nesterov's method is the estimation of the step size at each of the optimization iterations. Previous approaches either apply a constant step size, which assumes that the Lipschitz gradient is known in advance, or require a sequence of decreasing step sizes, which leads to slow convergence in practice. In this paper, we propose an adaptive line search scheme which allows the step size to be tuned adaptively and meanwhile guarantees the optimal convergence rate.
Empirical comparisons with several state-of-the-art algorithms demonstrate the efficiency of the proposed Lassplore algorithm for large-scale problems.", "title": "" }, { "docid": "88968e939e9586666c83c13d4f640717", "text": "The economics of two-sided markets or multi-sided platforms has emerged over the past decade as one of the most active areas of research in economics and strategy. The literature has constantly struggled, however, with a lack of agreement on a proper definition: for instance, some existing definitions imply that retail firms such as grocers, supermarkets and department stores are multi-sided platforms (MSPs). We propose a definition which provides a more precise notion of MSPs by requiring that they enable direct interactions between the multiple customer types which are affiliated to them. Several important implications of this new definition are derived. First, cross-group network effects are neither necessary nor sufficient for an organization to be a MSP. Second, our definition emphasizes the difference between MSPs and alternative forms of intermediation such as “re-sellers” which take control over the interactions between the various sides, or input suppliers which have only one customer group affiliated as opposed to multiple. We discuss a number of examples that illustrate the insights that can be derived by applying our definition. Third, we point to the economic considerations that determine where firms choose to position themselves on the continuum between MSPs and resellers, or MSPs and input suppliers. 1 Britta Kelley provided excellent research assistance. We are grateful to Elizabeth Altman, Tom Eisenmann and Marc Rysman for comments on an earlier draft. 2 Harvard University, ahagiu@hbs.edu. 3 National University of Singapore, jwright@nus.edu.sg.", "title": "" }, { "docid": "4239773a9ef4636f4dd8e084b658a6bc", "text": "Alternative splicing and alternative polyadenylation (APA) of pre-mRNAs greatly contribute to transcriptome diversity, coding capacity of a genome and gene regulatory mechanisms in eukaryotes. Second-generation sequencing technologies have been extensively used to analyse transcriptomes. However, a major limitation of short-read data is that it is difficult to accurately predict full-length splice isoforms. Here we sequenced the sorghum transcriptome using Pacific Biosciences single-molecule real-time long-read isoform sequencing and developed a pipeline called TAPIS (Transcriptome Analysis Pipeline for Isoform Sequencing) to identify full-length splice isoforms and APA sites. Our analysis reveals transcriptome-wide full-length isoforms at an unprecedented scale with over 11,000 novel splice isoforms. Additionally, we uncover APA of ∼11,000 expressed genes and more than 2,100 novel genes. These results greatly enhance sorghum gene annotations and aid in studying gene regulation in this important bioenergy crop. The TAPIS pipeline will serve as a useful tool to analyse Iso-Seq data from any organism.", "title": "" } ]
scidocsrr
57bdc835f025c6dba6e67ae55c7254cd
Polymorphic malware detection using sequence classification methods and ensembles
[ { "docid": "b37de4587fbadad9258c1c063b03a07a", "text": "Numerous attacks, such as worms, phishing, and botnets, threaten the availability of the Internet, the integrity of its hosts, and the privacy of its users. A core element of defense against these attacks is anti-virus(AV)–a service that detects, removes, and characterizes these threats. The ability of these products to successfully characterize these threats has far-reaching effects—from facilitating sharing across organizations, to detecting the emergence of new threats, and assessing risk in quarantine and cleanup. In this paper, we examine the ability of existing host-based anti-virus products to provide semantically meaningful information about the malicious software and tools (or malware) used by attackers. Using a large, recent collection of malware that spans a variety of attack vectors (e.g., spyware, worms, spam), we show that different AV products characterize malware in ways that are inconsistent across AV products, incomplete across malware, and that fail to be concise in their semantics. To address these limitations, we propose a new classification technique that describes malware behavior in terms of system state changes (e.g., files written, processes created) rather than in sequences or patterns of system calls. To address the sheer volume of malware and diversity of its behavior, we provide a method for automatically categorizing these profiles of malware into groups that reflect similar classes of behaviors and demonstrate how behavior-based clustering provides a more direct and effective way of classifying and analyzing Internet malware.", "title": "" }, { "docid": "252f4bcaeb5612a3018578ec2008dd71", "text": "Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/ .", "title": "" } ]
[ { "docid": "5cea0630252f2d36c849be957503944e", "text": "In this paper, we propose an efficient in-DBMS solution for the problem of sub-trajectory clustering and outlier detection in large moving object datasets. The method relies on a two-phase process: a voting-and-segmentation phase that segments trajectories according to a local density criterion and trajectory similarity criteria, followed by a sampling-and-clustering phase that selects the most representative sub-trajectories to be used as seeds for the clustering process. Our proposal, called STClustering (for Sampling-based Sub-Trajectory Clustering) is novel since it is the first, to our knowledge, that addresses the pure spatiotemporal sub-trajectory clustering and outlier detection problem in a real-world setting (by ‘pure’ we mean that the entire spatiotemporal information of trajectories is taken into consideration). Moreover, our proposal can be efficiently registered as a database query operator in the context of extensible DBMS (namely, PostgreSQL in our current implementation). The effectiveness and the efficiency of the proposed algorithm are experimentally validated over synthetic and real-world trajectory datasets, demonstrating that STClustering outperforms an off-the-shelf in-DBMS solution using PostGIS by several orders of magnitude. CCS Concepts • Information systems ➝ Information systems applications ➝ Data mining ➝ Clustering • Information systems ➝ Information systems applications ➝ Spatio-temporal systems", "title": "" }, { "docid": "ca26daaa9961f7ba2343ae84245c1181", "text": "In a recently held WHO workshop it has been recommended to abandon the distinction between potentially malignant lesions and potentially malignant conditions and to use the term potentially malignant disorders instead. Of these disorders, leukoplakia and erythroplakia are the most common ones. These diagnoses are still defined by exclusion of other known white or red lesions. In spite of tremendous progress in the field of molecular biology there is yet no single marker that reliably enables to predict malignant transformation in an individual patient. The general advice is to excise or laser any oral of oropharyngeal leukoplakia/erythroplakia, if feasible, irrespective of the presence or absence of dysplasia. Nevertheless, it is actually unknown whether such removal truly prevents the possible development of a squamous cell carcinoma. At present, oral lichen planus seems to be accepted in the literature as being a potentially malignant disorder, although the risk of malignant transformation is lower than in leukoplakia. There are no means to prevent such event. The efficacy of follow-up of oral lichen planus is questionable. Finally, brief attention has been paid to oral submucous fibrosis, actinic cheilitis, some inherited cancer syndromes and immunodeficiency in relation to cancer predisposition.", "title": "" }, { "docid": "fbcf9ddf08fc14c4551a82653d53963d", "text": "Non-normal data and heteroscedasticity are two common problems encountered when dealing with testing for location measures. Non-normality exists either from the shape of the distributions or by the presence of outliers. Outliers occur when there exist data values that are very different from the majority of cases in the data set. Outliers are important because they can influence the results of the data analysis. This paper demonstrated the detection of outliers by using robust scale estimators such as MADn, Tn and LMSn as trimming criteria. 
These criteria will trim extreme values without prior determination of trimming percentage. Sample data was used in this study to illustrate how extreme values are removed by these trimming criteria. We will present how these were done in a SAS program.", "title": "" }, { "docid": "20d95255d3cf72174cbdc6f8614796a5", "text": "This paper gives a review of the recent developments in deep learning and unsupervised feature learning for time-series problems. While these techniques have shown promise for modeling static data, such as computer vision, applying them to time-series data is gaining increasing attention. This paper overviews the particular challenges present in time-series data and provides a review of the works that have either applied time-series data to unsupervised feature learning algorithms or alternatively have contributed to modi cations of feature learning algorithms to take into account the challenges present in time-series data.", "title": "" }, { "docid": "8eb96feea999ce77f2b56b7941af2587", "text": "The term cyber security is often used interchangeably with the term information security. This paper argues that, although there is a substantial overlap between cyber security and information security, these two concepts are not totally analogous. Moreover, the paper posits that cyber security goes beyond the boundaries of traditional information security to include not only the protection of information resources, but also that of other assets, including the person him/herself. In information security, reference to the human factor usually relates to the role(s) of humans in the security process. In cyber security this factor has an additional dimension, namely, the humans as potential targets of cyber attacks or even unknowingly participating in a cyber attack. This additional dimension has ethical implications for society as a whole, since the protection of certain vulnerable groups, for example children, could be seen as a societal responsibility. a 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3a322129019eed67686018404366fe0b", "text": "Scientists and casual users need better ways to query RDF databases or Linked Open Data. Using the SPARQL query language requires not only mastering its syntax and semantics but also understanding the RDF data model, the ontology used, and URIs for entities of interest. Natural language query systems are a powerful approach, but current techniques are brittle in addressing the ambiguity and complexity of natural language and require expensive labor to supply the extensive domain knowledge they need. We introduce a compromise in which users give a graphical \"skeleton\" for a query and annotates it with freely chosen words, phrases and entity names. We describe a framework for interpreting these \"schema-agnostic queries\" over open domain RDF data that automatically translates them to SPARQL queries. The framework uses semantic textual similarity to find mapping candidates and uses statistical approaches to learn domain knowledge for disambiguation, thus avoiding expensive human efforts required by natural language interface systems. We demonstrate the feasibility of the approach with an implementation that performs well in an evaluation on DBpedia data.", "title": "" }, { "docid": "8bf5f5e332159674389d2026514fbc15", "text": "This project examines the nature of password cracking and modern applications. Several applications for different platforms are studied. 
Different methods of cracking are explained, including dictionary attack, brute force, and rainbow tables. Password cracking across different mediums is examined. Hashing and how it affects password cracking is discussed. An implementation of two hash-based password cracking algorithms is developed, along with experimental results of their efficiency.", "title": "" }, { "docid": "54ca6cb3e71574fc741c3181b8a4871c", "text": "Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.", "title": "" }, { "docid": "77cf780ce8b2c7b6de57c83f6b724dba", "text": "BACKGROUND\nAlthough there are several case reports of facial skin ischemia/necrosis caused by hyaluronic acid filler injections, no systematic study of the clinical outcomes of a series of cases with this complication has been reported.\n\n\nMETHODS\nThe authors report a study of 20 consecutive patients who developed impending nasal skin necrosis as a primary concern, after nose and/or nasolabial fold augmentation with hyaluronic acid fillers. The authors retrospectively reviewed the clinical outcomes and the risk factors for this complication using case-control analysis.\n\n\nRESULTS\nSeven patients (35 percent) developed full skin necrosis, and 13 patients (65 percent) recovered fully after combination treatment with hyaluronidase. Although the two groups had similar age, sex, filler injection sites, and treatment for the complication, 85 percent of the patients in the full skin necrosis group were late presenters who did not receive the combination treatment with hyaluronidase within 2 days after the vascular complication first appeared. In contrast, just 15 percent of the patients in the full recovery group were late presenters (p = 0.004).\n\n\nCONCLUSIONS\nNose and nasolabial fold augmentations with hyaluronic acid fillers can lead to impending nasal skin necrosis, possibly caused by intravascular embolism and/or extravascular compression. The key for preventing the skin ischemia from progressing to necrosis is to identify and treat the ischemia as early as possible. 
Early (<2 days) combination treatment with hyaluronidase is associated with the full resolution of the complication.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, IV.", "title": "" }, { "docid": "52e492ff5e057a8268fd67eb515514fe", "text": "We present a long-range passive (battery-free) radio frequency identification (RFID) and distributed sensing system using a single wire transmission line (SWTL) as the communication channel. A SWTL exploits guided surface wave propagation along a single conductor, which can be formed from existing infrastructure, such as power lines, pipes, or steel cables. Guided propagation along a SWTL has far lower losses than a comparable over-the-air (OTA) communication link; so much longer read distances can be achieved compared with the conventional OTA RFID system. In a laboratory-scale experiment with an ISO18000–6C (EPC Gen 2) passive tag, we demonstrate an RFID system using an 8 mm diameter, 5.2 m long SWTL. This SWTL has 30 dB lower propagation loss than a standard OTA RFID system at the same read range. We further demonstrate that the SWTL can tolerate extreme temperatures far beyond the capabilities of coaxial cable, by heating an operating SWTL conductor with a propane torch having a temperature of nearly 2000 °C. Extrapolation from the measured results suggest that a SWTL-based RFID system is capable of read ranges of over 70 m assuming a reader output power of +32.5 dBm and a tag power-up threshold of −7 dBm.", "title": "" }, { "docid": "f2377c76df4a2bcf0af063cb86befdda", "text": "Overexpression of ErbB2, a receptor-like tyrosine kinase, is shared by several types of human carcinomas. In breast tumors the extent of overexpression has a prognostic value, thus identifying the oncoprotein as a target for therapeutic strategies. Already, antibodies to ErbB2 are used in combination with chemotherapy in the treatment of metastasizing breast cancer. The mechanisms underlying the oncogenic action of ErbB2 involve a complex network in which ErbB2 acts as a ligand-less signaling subunit of three other receptors that directly bind a large repertoire of stroma-derived growth factors. The major partners of ErbB2 in carcinomas are ErbB1 (also called EGFR) and ErbB3, a kinase-defective receptor whose potent mitogenic action is activated in the context of heterodimeric complexes. Why ErbB2-containing heterodimers are relatively oncopotent is a function of a number of processes. Apparently, these heterodimers evade normal inactivation processes, by decreasing the rate of ligand dissociation, internalizing relatively slowly and avoiding the degradative pathway by returning to the cell surface. On the other hand, the heterodimers strongly recruit survival and mitogenic pathways such as the mitogen-activated protein kinases and the phosphatidylinositol 3-kinase. Hyper-activated signaling through the ErbB-signaling network results in dysregulation of the cell cycle homeostatic machinery, with upregulation of active cyclin-D/CDK complexes. Recent data indicate that cell cycle regulators are also linked to chemoresistance in ErbB2-dependent breast carcinoma. Together with D-type cyclins, it seems that the CDK inhibitor p21Waf1 plays an important role in evasion from apoptosis. 
These recent findings herald a preliminary understanding of the output layer which connects elevated ErbB-signaling to oncogenesis and chemoresistance.", "title": "" }, { "docid": "b009c2b4cc62f7cc430deb671de4a192", "text": "Electric vehicles are gaining importance and help to reduce dependency on oil, increase energy efficiency of transportation, reduce carbon emissions and noise, and avoid tail pipe emissions. Because of short driving distances, high mileages, and intermediate waiting times, fossil-fuelled taxi vehicles are ideal candidates for being replaced by battery electric vehicles (BEVs). Moreover, taxis as BEVs would increase visibility of electric mobility and therefore encourage others to purchase an electric vehicle. Prior to replacing conventional taxis with BEVs, a suitable charging infrastructure has to be established. This infrastructure, which is a prerequisite for the use of BEVs in practice, consists of a sufficiently dense network of charging stations taking into account the lower driving ranges of BEVs. In this case study we propose a decision support system for placing charging stations to satisfy the charging demand of electric taxi vehicles. Operational taxi data from about 800 vehicles is used to identify and estimate the charging demand for electric taxis based on frequent origins and destinations of trips. Next, a variant of the maximal covering location problem is formulated and solved, aiming at satisfying as much charging demand as possible with a limited number of charging stations. Already existing fast charging locations are considered in the optimization problem. In this work, we focus on finding regions in which charging stations should be placed, rather than exact locations. The exact location within an area is identified in a post-optimization phase (e.g., by authorities), where environmental conditions are considered, e.g., the capacity of the power network, availability of space, and legal issues. Our approach is implemented in the city of Vienna, Austria, in the course of an applied research project conducted in 2014. Local authorities, power network operators, representatives of taxi driver guilds as well as a radio taxi provider participated in the project and identified exact locations for charging stations based on our decision support system. ∗Corresponding author Email addresses: johannes.asamer@ait.ac.at (Johannes Asamer), martin.reinthaler@ait.ac.at (Martin Reinthaler), mario.ruthmair@univie.ac.at (Mario Ruthmair), markus.straub@ait.ac.at (Markus Straub), jakob.puchinger@centralesupelec.fr (Jakob Puchinger) Preprint submitted to Elsevier November 6, 2015", "title": "" }, { "docid": "51db8011d3dfd60b7808abc6868f7354", "text": "Security issue in cloud environment is one of the major obstacle in cloud implementation. Network attacks make use of the vulnerability in the network and the protocol to damage the data and application. Cloud follows distributed technology; hence it is vulnerable for intrusions by malicious entities. Intrusion detection systems (IDS) has become a basic component in network protection infrastructure and a necessary method to defend systems from various attacks. Distributed denial of service (DDoS) attacks are a great problem for a user of computers linked to the Internet. Data mining techniques are widely used in IDS to identify attacks using the network traffic. This paper presents and evaluates a Radial basis function neural network (RBF-NN) detector to identify DDoS attacks. 
Many of the training algorithms for RBF-NNs start with a predetermined structure of the network that is selected either by means of a priori knowledge or depending on prior experience. The resultant network is frequently inadequate or needlessly intricate and a suitable network structure could be configured only by trial and error method. This paper proposes Bat algorithm (BA) to configure RBF-NN automatically. Simulation results demonstrate the effectiveness of the proposed method.", "title": "" }, { "docid": "6eebd82e4d2fe02e9b26190638e9d159", "text": "Agile development methodologies have been gaining acceptance in the mainstream software development community. While there are numerous studies of agile development in academic and educational settings, there has been little detailed reporting of the usage, penetration and success of agile methodologies in traditional, professional software development organizations. We report on the results of an empirical study conducted at Microsoft to learn about agile development and its perception by people in development, testing, and management. We found that one-third of the study respondents use agile methodologies to varying degrees, and most view it favorably due to improved communication between team members, quick releases and the increased flexibility of agile designs. The scrum variant of agile methodologies is by far the most popular at Microsoft. Our findings also indicate that developers are most worried about scaling agile to larger projects (greater than twenty members), attending too many meetings and the coordinating agile and non-agile teams.", "title": "" }, { "docid": "e830098f9c045d376177e6d2644d4a06", "text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.", "title": "" }, { "docid": "500202f494dc3769fdb0c7de98aec9c7", "text": "Clocked comparators have found widespread use in noise sensitive applications including analog-to-digital converters, wireline receivers, and memory bit-line detectors. 
However, their nonlinear, time-varying dynamics resulting in discrete output levels have discouraged the use of traditional linear time-invariant (LTI) small-signal analysis and noise simulation techniques. This paper describes a linear, time-varying (LTV) model of clock comparators that can accurately predict the decision error probability without resorting to more general stochastic system models. The LTV analysis framework in conjunction with the linear, periodically time-varying (LPTV) simulation algorithms available from RF circuit simulators can provide insights into the intrinsic sampling and decision operations of clock comparators and the major contribution sources to random decision errors. Two comparators are simulated and compared with laboratory measurements. A 90-nm CMOS comparator is measured to have an equivalent input-referred random noise of 0.73 mVrms for dc inputs, matching simulation results with a short channel excess noise factor ¿ = 2.", "title": "" }, { "docid": "6e140b1901184183c7cc4cfc10532b84", "text": "During January and February 2001, an outbreak of febrile illness associated with altered sensorium was observed in Siliguri, West Bengal, India. Laboratory investigations at the time of the outbreak did not identify an infectious agent. Because Siliguri is in close proximity to Bangladesh, where outbreaks of Nipah virus (NiV) infection were recently described, clinical material obtained during the Siliguri outbreak was retrospectively analyzed for evidence of NiV infection. NiV-specific immunoglobulin M (IgM) and IgG antibodies were detected in 9 of 18 patients. Reverse transcription-polymerase chain reaction (RT-PCR) assays detected RNA from NiV in urine samples from 5 patients. Sequence analysis confirmed that the PCR products were derived from NiV RNA and suggested that the NiV from Siliguri was more closely related to NiV isolates from Bangladesh than to NiV isolates from Malaysia. NiV infection has not been previously detected in India.", "title": "" }, { "docid": "473968c14db4b189af126936fd5486ca", "text": "Disclaimer/Complaints regulations If you believe that digital publication of certain material infringes any of your rights or (privacy) interests, please let the Library know, stating your reasons. In case of a legitimate complaint, the Library will make the material inaccessible and/or remove it from the website. Please Ask the Library: http://uba.uva.nl/en/contact, or a letter to: Library of the University of Amsterdam, Secretariat, Singel 425, 1012 WP Amsterdam, The Netherlands. You will be contacted as soon as possible.", "title": "" }, { "docid": "a895b7888b15e49a2140bcea9c20e0b9", "text": "Deep convolutional neural networks (DNNs) have brought significant performance improvements to face recognition. However the training can hardly be carried out on mobile devices because the training of these models requires much computational power. An individual user with the demand of deriving DNN models from her own datasets usually has to outsource the training procedure onto a cloud or edge server. However this outsourcing method violates privacy because it exposes the users’ data to curious service providers. In this paper, we utilize the differentially private mechanism to enable the privacy-preserving edge based training of DNN face recognition models. During the training, DNN is split between the user device and the edge server in a way that both private data and model parameters are protected, with only a small cost of local computations. 
We show that our mechanism is capable of training models in different scenarios, e.g., from scratch, or through fine-tuning over existing models.", "title": "" }, { "docid": "60f2baba7922543e453a3956eb503c05", "text": "Pylearn2 is a machine learning research library. This does not just mean that it is a collection of machine learning algorithms that share a common API; it means that it has been designed for flexibility and extensibility in order to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summary of the library's architecture, and a description of how the Pylearn2 community functions socially.", "title": "" } ]
scidocsrr
b05b88e5f94806a65b945385f16b9dc5
Directly Modeling Missing Data in Sequences with RNNs: Improved Classification of Clinical Time Series
[ { "docid": "42c890832d861ad2854fd1f56b13eb45", "text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.", "title": "" } ]
[ { "docid": "9eaedcf7ab75f690f42466375a9ceaa6", "text": "This paper presents a Current Mode Logic (CML) transmitter circuit that forms part of a Serializer/ Deserializer IP core used in a high speed I/O links targeted for 10+ Gbps Ethernet applications. The paper discusses the 3 tap FIR filter equalization implemented to minimize the effects of Inter Symbol interference (ISI) and attenuation of high speed signal content in the channel. The paper also discusses on the design optimization implemented using hybrid segmentation of driver segments which results in improved control on the step sizes variations, Differential Non Linearity (DNL) errors at segment boundaries over Process mismatch variations.", "title": "" }, { "docid": "597b893e42df1bfba3d17b2d3ec31539", "text": "Genetic Programming (GP) is an evolutionary algorithm that has received a lot of attention lately due to its success in solving hard real-world problems. Lately, there has been considerable interest in GP's community to develop semantic genetic operators, i.e., operators that work on the phenotype. In this contribution, we describe EvoDAG (Evolving Directed Acyclic Graph) which is a Python library that implements a steady-state semantic Genetic Programming with tournament selection using an extension of our previous crossover operators based on orthogonal projections in the phenotype space. To show the effectiveness of EvoDAG, it is compared against state-of-the-art classifiers on different benchmark problems, experimental results indicate that EvoDAG is very competitive.", "title": "" }, { "docid": "44cda3da01ebd82fe39d886f8520ce13", "text": "This paper describes some of the work on stereo that has been going on at INRIA in the last four years. The work has concentrated on obtaining dense, accurate, and reliable range maps of the environment at rates compatible with the real-time constraints of such applications as the navigation of mobile vehicles in man-made or natural environments. The class of algorithms which has been selected among several is the class of correlationbased stereo algorithms because they are the only ones that can produce su ciently dense range maps with an algorithmic structure which lends itself nicely to fast implementations because of the simplicity of the underlying computation. We describe the various improvements that we have brought to the original idea, including validation and characterization of the quality of the matches, a recursive implementation of the score computation which makes the method independent of the size of the correlation window, and a calibration method which does not require the use of a calibration pattern. We then describe two implementations of this algorithm on two very di erent pieces of hardware. The rst implementation is on a board with four Digital Signal Processors designed jointly with Matra MSII. This implementation can produce 64 64 range maps at rates varying between 200 and 400 ms, depending upon the range of disparities. The second implementation is on a board developed by DEC-PRL and can perform the cross-correlation of two 256 256 images in 140 ms. The rst implementation has been integrated in the navigation system of the INRIA cart and used to correct for inertial and odometric errors in navigation experiments both indoors and outdoors on road. This is the rst application of our correlation-based algorithm which is described in the paper. 
The second application has been done jointly with people from the french national space agency (CNES) to study the possibility of using stereo on a future planetary rover for the construction of Digital Elevation Maps. We have shown that real time stereo is possible today at low-cost and can be applied in real applications. The algorithm that has been described is not the most sophisticated available but we have made it robust and reliable thanks to a number of improvements. Even though each of these improvements is not earth-shattering from the pure research point of view, altogether they have allowed us to go beyond a very important threshold. This threshold measures the di erence between a program that runs in the laboratory on a few images and one that works continuously for hours on a sequence of stereo pairs and produces results at such rates and of such quality that they can be used to guide a real vehicle or to produce Discrete Elevation Maps. We believe that this threshold has only been reached in a very small number of cases.", "title": "" }, { "docid": "a218d5aac0f5d52d3828cdff05a9009b", "text": "This paper proposes a single-stage high-power-factor (HPF) LED driver with coupled inductors for street-lighting applications. The presented LED driver integrates a dual buck-boost power-factor-correction (PFC) ac-dc converter with coupled inductors and a half-bridge-type LLC dc-dc resonant converter into a single-stage-conversion circuit topology. The coupled inductors inside the dual buck-boost converter subcircuit are designed to be operated in the discontinuous-conduction mode for obtaining high power-factor (PF). The half-bridge-type LLC resonant converter is designed for achieving soft-switching on two power switches and output rectifier diodes, in order to reduce their switching losses. This paper develops and implements a cost-effective driver for powering a 144-W-rated LED street-lighting module with input utility-line voltage ranging from 100 to 120 V. The tested prototype yields satisfying experimental results, including high circuit efficiency (>89.5%), low input-current total-harmonic distortion (<; 5.5%), high PF (> 0.99), low output-voltage ripple (<; 7.5%), and low output-current ripple (<; 5%), thus demonstrating the feasibility of the proposed LED driver.", "title": "" }, { "docid": "982dae78e301aec02012d9834f000d6d", "text": "This paper investigates a universal approach of synthesizing arbitrary ternary logic circuits in quantum computation based on the truth table technology. It takes into account of the relationship of classical logic and quantum logic circuits. By adding inputs with constant value and garbage outputs, the classical non-reversible logic can be transformed into reversible logic. Combined with group theory, it provides an algorithm using the ternary Swap gate, ternary NOT gate and ternary Toffoli gate library. Simultaneously, the main result shows that the numbers of qutrits we use are minimal compared to other methods. We also illustrate with two examples to test our approach.", "title": "" }, { "docid": "9385259a7dd9ed123f61141d933ab2a4", "text": "Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. 
Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included generalized least squares family of models and a Bayesian implementation of the conditional auto-regressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection when using the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.", "title": "" }, { "docid": "64cbc5ec72c81bd44e992076de5edc56", "text": "The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G : R → R. Our main theorem is that, if G is L-Lipschitz, then roughly O(k logL) random Gaussian measurements suffice for an `2/`2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.", "title": "" }, { "docid": "8f4cebc98552d3024b477c2f1576e24f", "text": "The SentiMAG Multicentre Trial evaluated a new magnetic technique for sentinel lymph node biopsy (SLNB) against the standard (radioisotope and blue dye or radioisotope alone). The magnetic technique does not use radiation and provides both a color change (brown dye) and a handheld probe for node localization. The primary end point of this trial was defined as the proportion of sentinel nodes detected with each technique (identification rate). A total of 160 women with breast cancer scheduled for SLNB, who were clinically and radiologically node negative, were recruited from seven centers in the United Kingdom and The Netherlands. SLNB was undertaken after administration of both the magnetic and standard tracers (radioisotope with or without blue dye). A total of 170 SLNB procedures were undertaken on 161 patients, and 1 patient was excluded, leaving 160 patients for further analysis. The identification rate was 95.0 % (152 of 160) with the standard technique and 94.4 % (151 of 160) with the magnetic technique (0.6 % difference; 95 % upper confidence limit 4.4 %; 6.9 % discordance). Of the 22 % (35 of 160) of patients with lymph node involvement, 16 % (25 of 160) had at least 1 macrometastasis, and 6 % (10 of 160) had at least a micrometastasis. 
Another 2.5 % (4 of 160) had isolated tumor cells. Of 404 lymph nodes removed, 297 (74 %) were true sentinel nodes. The lymph node retrieval rate was 2.5 nodes per patient overall, 1.9 nodes per patient with the standard technique, and 2.0 nodes per patient with the magnetic technique. The magnetic technique is a feasible technique for SLNB, with an identification rate that is not inferior to the standard technique.", "title": "" }, { "docid": "ec0da5cea716d1270b2143ffb6c610d6", "text": "This study focuses on the development of a web-based Attendance Register System or formerly known as ARS. The development of this system is motivated due to the fact that the students’ attendance records are one of the important elements that reflect their academic achievements in the higher academic institutions. However, the current practice implemented in most of the higher academic institutions in Malaysia is becoming more prone to human errors and frauds. Assisted by the System Development Life Cycle (SDLC) methodology, the ARS has been built using the web-based applications such as PHP, MySQL and Apache to cater the recording and reporting of the students’ attendances. The development of this prototype system is inspired by the feasibility study done in Universiti Teknologi MARA, Malaysia where 550 respondents have taken part in answering the questionnaires. From the analysis done, it has revealed that a more systematic and revolutionary system is indeed needed to be reinforced in order to improve the process of recording and reporting the attendances in the higher academic institution. ARS can be easily accessed by the lecturers via the Web and most importantly, the reports can be generated in realtime processing, thus, providing invaluable information about the students’ commitments in attending the classes. This paper will discuss in details the development of ARS from the feasibility study until the design phase.", "title": "" }, { "docid": "52315f23e419ba27e6fd058fe8b7aa9d", "text": "Detected obstacles overlaid on the original image Polar map: The agent is at the center of the map, facing 00. The blue points correspond to polar positions of the obstacle points around the agent. 1. Talukder, A., et al. \"Fast and reliable obstacle detection and segmentation for cross-country navigation.\" Intelligent Vehicle Symposium, 2002. IEEE. Vol. 2. IEEE, 2002. 2. Sun, Deqing, Stefan Roth, and Michael J. Black. \"Secrets of optical flow estimation and their principles.\" Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010. 3. Bernini, Nicola, et al. \"Real-time obstacle detection using stereo vision for autonomous ground vehicles: A survey.\" Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on. IEEE, 2014. 4. Broggi, Alberto, et al. \"Stereo obstacle detection in challenging environments: the VIAC experience.\" Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011.", "title": "" }, { "docid": "906b785365a27e5d9c7f0a622996264b", "text": "In this paper, we put forward a new pre-processing scheme for automatic analysis of dermoscopic images. Our contributions are two-fold.
First, we present a procedure, an extension of previous approaches, which succeeds in removing confounding factors from dermoscopic images: these include shading induced by imaging non-flat skin surfaces and the effect of light-intensity falloff toward the edges of the dermoscopic image. This procedure is shown to facilitate the detection and removal of artifacts such as hairs as well. Second, we present a novel simple yet effective greyscale conversion approach that is based on physics and biology of human skin. Our proposed greyscale image provides high separability between a pigmented lesion and normal skin surrounding it. Finally, using our pre-processing scheme, we perform segmentation based on simple grey-level thresholding, with results outperforming the state of the art.", "title": "" }, { "docid": "34fa7e6d5d4f1ab124e3f12462e92805", "text": "Natural image modeling plays a key role in many vision problems such as image denoising. Image priors are widely used to regularize the denoising process, which is an ill-posed inverse problem. One category of denoising methods exploit the priors (e.g., TV, sparsity) learned from external clean images to reconstruct the given noisy image, while another category of methods exploit the internal prior (e.g., self-similarity) to reconstruct the latent image. Though the internal prior based methods have achieved impressive denoising results, the improvement of visual quality will become very difficult with the increase of noise level. In this paper, we propose to exploit image external patch prior and internal self-similarity prior jointly, and develop an external patch prior guided internal clustering algorithm for image denoising. It is known that natural image patches form multiple subspaces. By utilizing Gaussian mixture models (GMMs) learning, image similar patches can be clustered and the subspaces can be learned. The learned GMMs from clean images are then used to guide the clustering of noisy-patches of the input noisy images, followed by a low-rank approximation process to estimate the latent subspace for image recovery. Numerical experiments show that the proposed method outperforms many state-of-the-art denoising algorithms such as BM3D and WNNM.", "title": "" }, { "docid": "532d5655281bf409dd6a44c1f875cd88", "text": "BACKGROUND\nOlder adults are at increased risk of experiencing loneliness and depression, particularly as they move into different types of care communities. Information and communication technology (ICT) usage may help older adults to maintain contact with social ties. However, prior research is not consistent about whether ICT use increases or decreases isolation and loneliness among older adults.\n\n\nOBJECTIVE\nThe purpose of this study was to examine how Internet use affects perceived social isolation and loneliness of older adults in assisted and independent living communities. We also examined the perceptions of how Internet use affects communication and social interaction.\n\n\nMETHODS\nOne wave of data from an ongoing study of ICT usage among older adults in assisted and independent living communities in Alabama was used.
Regression analysis was used to determine the relationship between frequency of going online and isolation and loneliness (n=205) and perceptions of the effects of Internet use on communication and social interaction (n=60).\n\n\nRESULTS\nAfter controlling for the number of friends and family, physical/emotional social limitations, age, and study arm, a 1-point increase in the frequency of going online was associated with a 0.147-point decrease in loneliness scores (P=.005). Going online was not associated with perceived social isolation (P=.14). Among the measures of perception of the social effects of the Internet, each 1-point increase in the frequency of going online was associated with an increase in agreement that using the Internet had: (1) made it easier to reach people (b=0.508, P<.001), (2) contributed to the ability to stay in touch (b=0.516, P<.001), (3) made it easier to meet new people (b=0.297, P=.01, (4) increased the quantity of communication with others (b=0.306, P=.01), (5) made the respondent feel less isolated (b=0.491, P<.001), (6) helped the respondent feel more connected to friends and family (b=0.392, P=.001), and (7) increased the quality of communication with others (b=0.289, P=.01).\n\n\nCONCLUSIONS\nUsing the Internet may be beneficial for decreasing loneliness and increasing social contact among older adults in assisted and independent living communities.", "title": "" }, { "docid": "dfac485205134103cb66b07caa6fbaf0", "text": "Electrical responses of the single muscle fibre (SFER) by stimulation of the motor terminal nerve-endings have been investigated in normal subjects at various ages in vivo. Shape, latency, rise-time and interspike distance seem to be SFER's most interesting parameters of the functional organisation of the motor subunits and their terminal fractions. \"Time\" parameters of SFER are in agreement with the anatomo-functional characteristics of the excited tissues during ageing.", "title": "" }, { "docid": "944d467bb6da4991127b76310fec585b", "text": "One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publically available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. 
This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publically.", "title": "" }, { "docid": "e04e1dc5cd4d0729c661375486884b14", "text": "The Internet of Things (IoT) and the Web are closely related to each other. On the one hand, the Semantic Web has been including vocabularies and semantic models for the Internet of Things. On the other hand, the so-called Web of Things (WoT) advocates architectures relying on established Web technologies and RESTful interfaces for the IoT. In this paper, we present a vocabulary for WoT that aims at defining IoT concepts using terms from the Web. Notably, it includes two concepts identified as the core WoT resources: Thing Description (TD) and Interaction, that have been first elaborated by the W3C interest group for WoT. Our proposal is built upon the ontological pattern Identifier, Resource, Entity (IRE) that was originally designed for the Semantic Web. To better analyze the alignments our proposal allows, we reviewed existing IoT models as a vocabulary graph, complying with the approach of Linked Open Vocabularies (LOV).", "title": "" }, { "docid": "274ce66c0bcc77a1e4a858bef9e41111", "text": "It is a timely issue to understand the impact of bilingualism upon brain structure in healthy aging and upon cognitive decline given evidence of its neuroprotective effects. Plastic changes induced by bilingualism were reported in young adults in the left inferior parietal lobule (LIPL) and its right counterpart (RIPL) (Mechelli et al., 2004). Moreover, both age of second language (L2) acquisition and L2 proficiency correlated with increased grey matter (GM) in the LIPL/RIPL. However it is unknown whether such findings replicate in older bilinguals. We examined this question in an aging bilingual population from Hong Kong. Results from our Voxel Based Morphometry study show that elderly bilinguals relative to a matched monolingual control group also have increased GM volumes in the inferior parietal lobules underlining the neuroprotective effect of bilingualism. However, unlike younger adults, age of L2 acquisition did not predict GM volumes. Instead, LIPL and RIPL appear differentially sensitive to the effects of L2 proficiency and L2 exposure with LIPL more sensitive to the former and RIPL more sensitive to the latter. Our data also intimate that such * Corresponding author. University Vita-Salute San Raffaele, Via Olgettina 58, 20132 Milan, Italy. Tel.: þ39 0226434888. E-mail addresses: abutalebi.jubin@hsr.it, jubin@hku.hk (J. Abutalebi).", "title": "" }, { "docid": "df158503822641430e6f17a43655cf2e", "text": "Open information extraction (OIE) is the process to extract relations and their arguments automatically from textual documents without the need to restrict the search to predefined relations. In recent years, several OIE systems for the English language have been created but there is not any system for the Vietnamese language. In this paper, we propose a method of OIE for Vietnamese using a clause-based approach. Accordingly, we exploit Vietnamese dependency parsing using grammar clauses that strives to consider all possible relations in a sentence. 
The corresponding clause types are identified by their propositions as extractable relations based on their grammatical functions of constituents. As a result, our system is the first OIE system named vnOIE for the Vietnamese language that can generate open relations and their arguments from Vietnamese text with highly scalable extraction while being domain independent. Experimental results show that our OIE system achieves promising results with a precision of 83.71%.", "title": "" }, { "docid": "7809fdedaf075955523b51b429638501", "text": "PM10 prediction has attracted special legislative and scientific attention due to its harmful effects on human health. Statistical techniques have the potential for high-accuracy PM10 prediction and accordingly, previous studies on statistical methods for temporal, spatial and spatio-temporal prediction of PM10 are reviewed and discussed in this paper. A review of previous studies demonstrates that Support Vector Machines, Artificial Neural Networks and hybrid techniques show promise for suitable temporal PM10 prediction. A review of the spatial predictions of PM10 shows that the LUR (Land Use Regression) approach has been successfully utilized for spatial prediction of PM10 in urban areas. Of the six introduced approaches for spatio-temporal prediction of PM10, only one approach is suitable for high-resolved prediction (Spatial resolution < 100 m; Temporal resolution ≤ 24 h). In this approach, based upon the LUR modeling method, short-term dynamic input variables are employed as explanatory variables alongside typical non-dynamic input variables in a non-linear modeling procedure.", "title": "" } ]
scidocsrr
617ac6cf9494e8982b2e47b5604425da
NEMO : Neuro-Evolution with Multiobjective Optimization of Deep Neural Network for Speed and Accuracy
[ { "docid": "048b124d585c523905b1a61b68fcc09e", "text": "Driver’s status is crucial because one of the main reasons for motor vehicular accidents is related to driver’s inattention or drowsiness. Drowsiness detector on a car can reduce numerous accidents. Accidents occur because of a single moment of negligence, thus driver monitoring system which works in real-time is necessary. This detector should be deployable to an embedded device and perform at high accuracy. In this paper, a novel approach towards real-time drowsiness detection based on deep learning which can be implemented on a low cost embedded board and performs with a high accuracy is proposed. Main contribution of our paper is compression of heavy baseline model to a light weight model deployable to an embedded board. Moreover, minimized network structure was designed based on facial landmark input to recognize whether driver is drowsy or not. The proposed model achieved an accuracy of 89.5% on 3-class classification and speed of 14.9 frames per second (FPS) on Jetson TK1.", "title": "" }, { "docid": "c10dd691e79d211ab02f2239198af45c", "text": "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-ofthe-art.", "title": "" } ]
[ { "docid": "41eab64d00f1a4aaea5c5899074d91ca", "text": "Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.", "title": "" }, { "docid": "345a59aac1e89df5402197cca90ca464", "text": "Tony Velkov,* Philip E. Thompson, Roger L. Nation, and Jian Li* School of Medicine, Deakin University, Pigdons Road, Geelong 3217, Victoria, Australia, Medicinal Chemistry and Drug Action and Facility for Anti-infective Drug Development and Innovation, Drug Delivery, Disposition and Dynamics, Monash Institute of Pharmaceutical Sciences, Monash University, 381 Royal Parade, Parkville 3052, Victoria, Australia", "title": "" }, { "docid": "37b22de12284d38f6488de74f436ccc8", "text": "Entity disambiguation is an important step in many information retrieval applications. This paper proposes new research for entity disambiguation with the focus of name disambiguation in digital libraries. In particular, pairwise similarity is first learned for publications that share the same author name string (ANS) and then a novel Hierarchical Agglomerative Clustering approach with Adaptive Stopping Criterion (HACASC) is proposed to adaptively cluster a set of publications that share a same ANS to individual clusters of publications with different author identities. The HACASC approach utilizes a mixture of kernel ridge regressions to intelligently determine the threshold in clustering. This obtains more appropriate clustering granularity than non-adaptive stopping criterion. We conduct a large scale empirical study with a dataset of more than 2 million publication record pairs to demonstrate the advantage of the proposed HACASC approach.", "title": "" }, { "docid": "a35a564a2f0e16a21e0ef5e26601eab9", "text": "The social media revolution has created a dynamic shift in the digital marketing landscape. The voice of influence is moving from traditional marketers towards consumers through online social interactions. In this study, we focus on two types of online social interactions, namely, electronic word of mouth (eWOM) and observational learning (OL), and explore how they influence consumer purchase decisions. We also examine how receiver characteristics, consumer expertise and consumer involvement, moderate consumer purchase decision process. Analyzing panel data collected from a popular online beauty forum, we found that consumer purchase decisions are influenced by their online social interactions with others and that action-based OL information is more influential than opinion-based eWOM. 
Further, our results show that both consumer expertise and consumer involvement play an important moderating role, albeit in opposite direction: Whereas consumer expertise exerts a negative moderating effect, consumer involvement is found to have a positive moderating effect. The study makes important contributions to research and practice.", "title": "" }, { "docid": "e5a936bbd9e6dc0189b7cc18268f0f87", "text": "A new method of obtaining amplitude modulation (AM) for determining target location with spinning reticles is presented. The method is based on the use of graded transmission capabilities. The AM spinning reticles previously presented were functions of three parameters: amplitude vs angle, amplitude vs radius, and phase. This paper presents these parameters along with their capabilities and limitations and shows that multiple parameters can be integrated into a single reticle. It is also shown that AM parameters can be combined with FM parameters in a single reticle. Also, a general equation is developed that relates the AM parameters to a reticle transmission equation.", "title": "" }, { "docid": "f2478e4b1156e112f84adbc24a649d04", "text": "Community Question Answering (cQA) provides new interesting research directions to the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES/NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.", "title": "" }, { "docid": "0c8517bab8a8fa34f25a72cf6c971b25", "text": "Automotive radar sensors are key components for driver assistant systems. In order to handle complex traffic scenarios an advanced separability is required with respect to object angle, distance and velocity. In this contribution a highly integrated automotive radar sensor enabling chirp sequence modulation will be presented and discussed. Furthermore, the development of a target simulator which is essential for the characterization of such radar sensors will be introduced including measurements demonstrating the performance of our system.", "title": "" }, { "docid": "288f8a2dab0c32f85c313f5a145e47a5", "text": "Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. 
Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm. 1 Motivation The central problem of reinforcement learning is value function approximation: how to accurately estimate the total future reward from a given state. Recent successes have used deep neural networks to approximate the value function, resulting in state-of-the-art performance in a variety of challenging domains [9]. Neural networks are most effective when the desired target function is smooth. However, value functions are, by their very nature, discontinuous functions with sharp variations over time. In this paper we introduce a representation of value that matches the natural temporal structure of value functions. A value function represents the expected sum of future discounted rewards. If non-zero rewards occur infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as such rewarding moments approach and drops immediately after. This is depicted schematically with the dashed black line in Figure 1. The true value function is quite smooth, except immediately after receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains associate positive or negative reinforcements to salient events (like picking up an object, hitting a wall, or reaching a goal position). The problem is that the agent’s observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator – especially when employing differentiable function approximators such as neural networks that naturally make smooth maps from observations to outputs. To address this problem, we incorporate the temporal structure of cumulative discounted rewards into the value function itself. The main idea is that, by default, the value function can respect the reward sequence. If no reward is observed, then the next value smoothly matches the previous value, but 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Figure 1: After the same amount of training, our proposed method (red) produces much more accurate estimates of the true value function (dashed black), compared to the baseline (blue). The main plot shows discounted future returns as a function of the step in a sequence of states; the inset plot shows the RMSE when training on this data, as a function of network updates. See section 4 for details. becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from the previous value: in other words a reward that was expected has now been consumed. The natural value approximator (NVA) combines the previous value with the observed rewards and discounts, which makes this sequence of values easy to represent by a smooth function approximator such as a neural network. Natural value approximators may also be helpful in partially observed environments. Consider a situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it will take until the agent has crossed a valley to another hill top in the distance. There is fog in the valley, which means that if the agent’s state is a single observation from the valley it will not be able to accurately predict how many steps remain. 
In contrast, the value estimate from the initial hill top may be much better, because the observation is richer. This case is depicted schematically in Figure 2. Natural value approximators may be effective in these situations, since they represent the current value in terms of previous value estimates. 2 Problem definition We consider the typical scenario studied in reinforcement learning, in which an agent interacts with an environment at discrete time intervals: at each time step t the agent selects an action as a function of the current state, which results in a transition to the next state and a reward. The goal of the agent is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12]. The interaction between the agent and the environment is modelled as a Markov Decision Process (MDP). An MDP is a tuple (S,A, R, γ, P ) where S is a state space, A is an action space, R : S×A×S → D(R) is a reward function that defines a distribution over the reals for each combination of state, action, and subsequent state, P : S × A → D(S) defines a distribution over subsequent states for each state and action, and γt ∈ [0, 1] is a scalar, possibly time-dependent, discount factor. One common goal is to make accurate predictions under a behaviour policy π : S → D(A) of the value vπ(s) ≡ E [R1 + γ1R2 + γ1γ2R3 + . . . | S0 = s] . (1) The expectation is over the random variables At ∼ π(St), St+1 ∼ P (St, At), and Rt+1 ∼ R(St, At, St+1), ∀t ∈ N. For instance, the agent can repeatedly use these predictions to improve its policy. The values satisfy the recursive Bellman equation [2] vπ(s) = E [Rt+1 + γt+1vπ(St+1) | St = s] . We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions made by an approximate value function v(s;θ), where θ are parameters that are learned. The approximation of the true value function can be formed by temporal 2 difference (TD) learning [10], where the estimate at time t is updated towards Z t ≡ Rt+1 + γt+1v(St+1;θ) or Z t ≡ n ∑ i=1 (Πi−1 k=1γt+k)Rt+i + (Π n k=1γt+k)v(St+n;θ) ,(2) where Z t is the n-step bootstrap target, and the TD-error is δ n t ≡ Z t − v(St;θ). 3 Proposed solution: Natural value approximators The conventional approach to value function approximation produces a value estimate from features associated with the current state. In states where the value approximation is poor, it can be better to rely more on a combination of the observed sequence of rewards and older but more reliable value estimates that are projected forward in time. Combining these estimates can potentially be more accurate than using one alone. These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate, Vt ≡ v(St;θ), is a conventional value function estimate at time t. The second estimate, Gpt ≡ Gβt−1 −Rt γt if γt > 0 and t > 0 , (3) is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time t. The third estimate, Gβt ≡ βtG p t + (1− βt)Vt = (1− βt)Vt + βt Gβt−1 −Rt γt , (4) is a convex combination of the first two estimates1 formed by a time-dependent blending coefficient βt. This coefficient is a learned function of state β(·;θ) : S → [0, 1], over the same parameters θ, and we denote βt ≡ β(St;θ). We call Gβt the natural value estimate at time t and we call the overall approach natural value approximators (NVA). 
Ideally, the natural value estimate will become more accurate than either of its constituents from training. The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate Vt and the target Zt, weighted by how much it is used in the natural value estimate, JV ≡ E [ [[1− βt]]([[Zt]]− Vt) ] , (5) where we introduce the stop-gradient identity function [[x]] = x that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient βt, Jβ ≡ E [ ([[Zt]]− (βt [[Gpt ]] + (1− βt)[[Vt]])) ] . (6) These two losses are summed into a joint loss, J = JV + cβJβ , (7) where cβ is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of Vt are adapted with the first loss and parameters of βt are adapted with the second loss. When bootstrapping on future values, the most accurate value estimate is best, so using Gβt instead of Vt leads to refined prediction targets Z t ≡ Rt+1 + γt+1G β t+1 or Z β,n t ≡ n ∑ i=1 (Πi−1 k=1γt+k)Rt+i + (Π n k=1γt+k)G β t+n . (8) 4 Illustrative Examples We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate Gβt instead of the direct value estimate Vt. Note the mixed recursion in the definition, G depends on G , and vice-versa. 3 Sparse rewards Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point 0 ≤ t ≤ 100 on the horizontal axis corresponds to one state St in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with γ = 0.9. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so St ∈ R. The approximators v(s) and β(s) are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input", "title": "" }, { "docid": "837a68575b84782a252f8bd49ad654a0", "text": "We explore contemporary, data-driven techniques for solving math word problems over recent large-scale datasets. We show that well-tuned neural equation classifiers can outperform more sophisticated models such as sequence to sequence and self-attention across these datasets. Our error analysis indicates that, while fully data driven models show some promise, semantic and world knowledge is necessary for further advances.", "title": "" }, { "docid": "f4d6cd6f6cd453077e162b64ae485c62", "text": "Effects of Music Therapy on Prosocial Behavior of Students with Autism and Developmental Disabilities by Catherine L. de Mers Dr. Matt Tincani, Examination Committee Chair Assistant Professor o f Special Education University o f Nevada, Las Vegas This researeh study employed a multiple baseline across participants design to investigate the effects o f music therapy intervention on hitting, screaming, and asking o f three children with autism and/or developmental disabilities. 
Behaviors were observed and recorded during 10-minute free-play sessions both during baseline and immediately after music therapy sessions during intervention. Interobserver agreement and procedural fidelity data were collected. Music therapy sessions were modeled on literature pertaining to music therapy with children with autism. In addition, social validity surveys were collected to answer research questions pertaining to the social validity of music therapy as an intervention. Findings indicate that music therapy produced moderate and gradual effects on hitting, screaming, and asking. Hitting and screaming decreased following intervention, while asking increased. Intervention effects were maintained three weeks following", "title": "" }, { "docid": "5956e9399cfe817aa1ddec5553883bef", "text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks (GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.", "title": "" }, { "docid": "09b77e632fb0e5dfd7702905e51fc706", "text": "Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.", "title": "" }, { "docid": "93a9df00671b032986148106d7e90f70", "text": "Vulnerabilities in applications and their widespread exploitation through successful attacks are common these days. Testing applications for preventing vulnerabilities is an important step to address this issue. In recent years, a number of security testing approaches have been proposed.
However, there is no comparative study of these work that might help security practitioners select an appropriate approach for their needs. Moreover, there is no comparison with respect to automation capabilities of these approaches. In this work, we identify seven criteria to analyze program security testing work. These are vulnerability coverage, source of test cases, test generation method, level of testing, granularity of test cases, testing automation, and target applications. We compare and contrast prominent security testing approaches available in the literature based on these criteria. In particular, we focus on work that address four most common but dangerous vulnerabilities namely buffer overflow, SQL injection, format string bug, and cross site scripting. Moreover, we investigate automation features available in these work across a security testing process. We believe that our findings will provide practical information for security practitioners in choosing the most appropriate tools.", "title": "" }, { "docid": "952735cb937248c837e0b0244cd9dbb1", "text": "Recently, the desired very high throughput of 5G wireless networks drives millimeter-wave (mm-wave) communication into practical applications. A phased array technique is required to increase the effective antenna aperture at mm-wave frequency. Integrated solutions of beamforming/beam steering are extremely attractive for practical implementations. After a discussion on the basic principles of radio beam steering, we review and explore the recent advanced integration techniques of silicon-based electronic integrated circuits (EICs), photonic integrated circuits (PICs), and antenna-on-chip (AoC). For EIC, the latest advanced designs of on-chip true time delay (TTD) are explored. Even with such advances, the fundamental loss of a silicon-based EIC still exists, which can be solved by advanced PIC solutions with ultra-broad bandwidth and low loss. Advanced PIC designs for mm-wave beam steering are then reviewed with emphasis on an optical TTD. Different from the mature silicon-based EIC, the photonic integration technology for PIC is still under development. In this paper, we review and explore the potential photonic integration platforms and discuss how a monolithic integration based on photonic membranes fits the photonic mm-wave beam steering application, especially for the ease of EIC and PIC integration on a single chip. To combine EIC, for its accurate and mature fabrication techniques, with PIC, for its ultra-broad bandwidth and low loss, a hierarchical mm-wave beam steering chip with large-array delays realized in PIC and sub-array delays realized in EIC can be a future-proof solution. Moreover, the antenna units can be further integrated on such a chip using AoC techniques. Among the mentioned techniques, the integration trends on device and system levels are discussed extensively.", "title": "" }, { "docid": "b15ed1584eb030fba1ab3c882983dbf0", "text": "The need for automated grading tools for essay writing and open-ended assignments has received increasing attention due to the unprecedented scale of Massive Online Courses (MOOCs) and the fact that more and more students are relying on computers to complete and submit their school work. In this paper, we propose an efficient memory networks-powered automated grading model. The idea of our model stems from the philosophy that with enough graded samples for each score in the rubric, such samples can be used to grade future work that is found to be similar. 
For each possible score in the rubric, a student response graded with the same score is collected. These selected responses represent the grading criteria specified in the rubric and are stored in the memory component. Our model learns to predict a score for an ungraded response by computing the relevance between the ungraded response and each selected response in memory. The evaluation was conducted on the Kaggle Automated Student Assessment Prize (ASAP) dataset. The results show that our model achieves state-of-the-art performance in 7 out of 8 essay sets.", "title": "" }, { "docid": "44f257275a36308ce088881fafc92d7c", "text": "Frauds related to the ATM (Automatic Teller Machine) are increasing day by day which is a serious issue. ATM security is used to provide protection against these frauds. Though security is provided for ATM machine, cases of robberies are increasing. Previous technologies provide security within machines for secure transaction, but machine is not neatly protected. The ATM machines are not safe since security provided traditionally were either by using RFID reader or by using security guard outside the ATM. This security is not sufficient because RFID card can be stolen and can be misused for robbery as well as watchman can be blackmailed by the thief. So there is a need to propose new technology which can overcome this problem. This paper proposes a system which aims to design real-time monitoring and controlling system. The system is implemented using Raspberry Pi and fingerprint module which make the system more secure, cost effective and stand alone. For controlling purpose, Embedded Web Server (EWS) is designed using Raspberry Pi which serves web page on which video footage of ATM center is seen and controlled. So the proposed system removes the drawback of manual controlling camera module and door also this system is stand alone and cost effective.", "title": "" }, { "docid": "8920b9fbfe010af17e664c0b62c8e0a2", "text": "The field of machine learning is an interesting and relatively new area of research in artificial intelligence. In this paper, a special type of reinforcement learning, Q-Learning, was applied to the popular mobile game Flappy Bird. The QLearning algorithm was tested on two different environments. The original version and a simplified version. The maximum score achieved on the original version and simplified version were 169 and 28,851, respectively. The trade-off between runtime and accuracy was investigated. Using appropriate settings, the Q-Learning algorithm was proven to be successful with a relatively quick convergence time.", "title": "" }, { "docid": "1dc07b02a70821fdbaa9911755d1e4b0", "text": "The AROMA project is exploring the kind of awareness that people effortless are able to maintain about other beings who are located physically close. We are designing technology that attempts to mediate a similar kind of awareness among people who are geographically dispersed but want to stay better in touch. AROMA technology can be thought of as a stand-alone communication device or -more likely -an augmentation of existing technologies such as the telephone or full-blown media spaces. Our approach differs from other recent designs for awareness (a) by choosing pure abstract representations on the display site, (b) by possibly remapping the signal across media between capture and display, and, finally, (c) by explicitly extending the application domain to include more than the working life, to embrace social interaction in general. 
We are building a series of prototypes to learn if abstract representation of activity data does indeed convey a sense of remote presence and does so in a sufficiently subdued manner to allow the user to concentrate on his or her main activity. We have done some initial testing of the technical feasibility of our designs. What still remains is an extensive effort of designing a symbolic language of remote presence, done in parallel with studies of how people will connect and communicate through such a language as they live with the AROMA system.", "title": "" }, { "docid": "8d61cbb3df2ea134fa1252d5eff29597", "text": "Recovering 3D full-body human pose is a challenging problem with many applications. It has been successfully addressed by motion capture systems with body worn markers and multiple cameras. In this paper, we address the more challenging case of not only using a single camera but also not leveraging markers: going directly from 2D appearance to 3D geometry. Deep learning approaches have shown remarkable abilities to discriminatively learn 2D appearance features. The missing piece is how to integrate 2D, 3D, and temporal information to recover 3D geometry and account for the uncertainties arising from the discriminative model. We introduce a novel approach that treats 2D joint locations as latent variables whose uncertainty distributions are given by a deep fully convolutional neural network. The unknown 3D poses are modeled by a sparse representation and the 3D parameter estimates are realized via an Expectation-Maximization algorithm, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Extensive evaluation on benchmark datasets shows that the proposed approach achieves greater accuracy over state-of-the-art baselines. Notably, the proposed approach does not require synchronized 2D-3D data for training and is applicable to “in-the-wild” images, which is demonstrated with the MPII dataset.", "title": "" }, { "docid": "ba1368e4acc52395a8e9c5d479d4fe8f", "text": "This talk will present an overview of our recent research on distributional reinforcement learning. Our starting point is our recent ICML paper, in which we argued for the fundamental importance of the value distribution: the distribution of random returns received by a reinforcement learning agent. This is in contrast to the common approach, which models the expectation of this return, or value. Back then, we were able to design a new algorithm that learns the value distribution through a TD-like bootstrap process and achieved state-of-the-art performance on games from the Arcade Learning Environment (ALE). However, this left open the question as to why the distributional approach should perform better at all. We’ve since delved deeper into what makes distributional RL work: first by improving the original using quantile regression, which directly minimizes the Wasserstein metric; and second by unearthing surprising connections between the original C51 algorithm and the distant cousin of the Wasserstein metric, the Cramer distance.", "title": "" } ]
scidocsrr
3407fdd1aa3121aa6f110be5c6930c9e
A VNF-as-a-service design through micro-services disassembling the IMS
[ { "docid": "bf239cb017be0b2137b0b4fd1f1d4247", "text": "Network function virtualization was recently proposed to improve the flexibility of network service provisioning and reduce the time to market of new services. By leveraging virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, NFV decouples the software implementation of network functions from the underlying hardware. As an emerging technology, NFV brings several challenges to network operators, such as the guarantee of network performance for virtual appliances, their dynamic instantiation and migration, and their efficient placement. In this article, we provide a brief overview of NFV, explain its requirements and architectural framework, present several use cases, and discuss the challenges and future directions in this burgeoning research area.", "title": "" }, { "docid": "75a637281cb0ed9c307bc900e2a0da66", "text": "Cloud computing provides new opportunities to deploy scalable application in an efficient way, allowing enterprise applications to dynamically adjust their computing resources on demand. In this paper we analyze and test the microservice architecture pattern, used during the last years by large Internet companies like Amazon, Netflix and LinkedIn to deploy large applications in the cloud as a set of small services that can be developed, tested, deployed, scaled, operated and upgraded independently, allowing these companies to gain agility, reduce complexity and scale their applications in the cloud in a more efficient way. We present a case study where an enterprise application was developed and deployed in the cloud using a monolithic approach and a microservice architecture using the Play web framework. We show the results of performance tests executed on both applications, and we describe the benefits and challenges that existing enterprises can get and face when they implement microservices in their applications.", "title": "" } ]
[ { "docid": "ed28d1b8142a2149a1650e861deb7c53", "text": "Over the last few years, the use of virtualization technologies has increased dramatically. This makes the demand for efficient and secure virtualization solutions become more obvious. Container-based virtualization and hypervisor-based virtualization are two main types of virtualization technologies that have emerged to the market. Of these two classes, container-based virtualization is able to provide a more lightweight and efficient virtual environment, but not without security concerns. In this paper, we analyze the security level of Docker, a well-known representative of container-based approaches. The analysis considers two areas: (1) the internal security of Docker, and (2) how Docker interacts with the security features of the Linux kernel, such as SELinux and AppArmor, in order to harden the host system. Furthermore, the paper also discusses and identifies what could be done when using Docker to increase its level of security.", "title": "" }, { "docid": "7f5815a918c6d04783d68dbc041cc6a0", "text": "This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.", "title": "" }, { "docid": "fbde8c336fe5d707d247faa51bb8c76c", "text": "The paper approaches the problem of imageto-text with attention-based encoder-decoder networks that are trained to handle sequences of characters rather than words. We experiment on lines of text from a popular handwriting database with different attention mechanisms for the decoder. The model trained with softmax attention achieves the lowest test error, outperforming several other RNN-based models. Our results show that softmax attention is able to learn a linear alignment whereas the alignment generated by sigmoid attention is linear but much less precise.", "title": "" }, { "docid": "b47d411ca9a59331b79931c1b1e984f6", "text": "A novel miniature wideband rectangular patch antenna is designed for wireless local area network (WLANs) applications and operating for 5-6 GHz ISM band, and wideband applications. The proposed antenna gives a bandwidth of 4.84 to 6.56 GHz for S11<-10dB. The antenna has the dimensions of 20 mm by 15 mm by 0.8 mm on FR4 substrate. Rectangular slot and step have been used for bandwidth improvement.", "title": "" }, { "docid": "ba3315636b720625e7b285b26d8d371a", "text": "Sharing of physical infrastructure using virtualization presents an opportunity to improve the overall resource utilization. It is extremely important for a Software as a Service (SaaS) provider to understand the characteristics of the business application workload in order to size and place the virtual machine (VM) containing the application. A typical business application has a multi-tier architecture and the application workload is often predictable. 
Using the knowledge of the application architecture and statistical analysis of the workload, one can obtain an appropriate capacity and a good placement strategy for the corresponding VM. In this paper we propose a tool iCirrus-WoP that determines VM capacity and VM collocation possibilities for a given set of application workloads. We perform an empirical analysis of the approach on a set of business application workloads obtained from geographically distributed data centers. The iCirrus-WoP tool determines the fixed reserved capacity and a shared capacity of a VM which it can share with another collocated VM. Based on the workload variation, the tool determines if the VM should be statically allocated or needs a dynamic placement. To determine the collocation possibility, iCirrus-WoP performs a peak utilization analysis of the workloads. The empirical analysis reveals the possibility of collocating applications running in different time-zones. The VM capacity that the tool recommends, show a possibility of improving the overall utilization of the infrastructure by more than 70% if they are appropriately collocated.", "title": "" }, { "docid": "f4b270b09649ba05dd22d681a2e3e3b7", "text": "Advanced analytical techniques are gaining popularity in addressing complex classification type decision problems in many fields including healthcare and medicine. In this exemplary study, using digitized signal data, we developed predictive models employing three machine learning methods to diagnose an asthma patient based solely on the sounds acquired from the chest of the patient in a clinical laboratory. Although, the performances varied slightly, ensemble models (i.e., Random Forest and AdaBoost combined with Random Forest) achieved about 90% accuracy on predicting asthma patients, compared to artificial neural networks models that achieved about 80% predictive accuracy. Our results show that noninvasive, computerized lung sound analysis that rely on low-cost microphones and an embedded real-time microprocessor system would help physicians to make faster and better diagnostic decisions, especially in situations where x-ray and CT-scans are not reachable or not available. This study is a testament to the improving capabilities of analytic techniques in support of better decision making, especially in situations constraint by limited resources.", "title": "" }, { "docid": "91b6b9e22f191cfec87d7b62d809542c", "text": "In the past few years, the storage and analysis of large-scale and fast evolving networks present a great challenge. Therefore, a number of different techniques have been proposed for sampling large networks. In general, network exploration techniques approximate the original networks more accurately than random node and link selection. Yet, link selection with additional subgraph induction step outperforms most other techniques. In this paper, we apply subgraph induction also to random walk and forest-fire sampling. We analyze different real-world networks and the changes of their properties introduced by sampling. We compare several sampling techniques based on the match between the original networks and their sampled variants. The results reveal that the techniques with subgraph induction underestimate the degree and clustering distribution, while overestimate average degree and density of the original networks. Techniques without subgraph induction step exhibit exactly the opposite behavior. 
Hence, the performance of the sampling techniques from random selection category compared to network exploration sampling does not differ significantly, while clear differences exist between the techniques with subgraph induction step and the ones without it.", "title": "" }, { "docid": "6bdb8048915000b2d6c062e0e71b8417", "text": "Depressive disorders are the most typical disease affecting many different factors of humanity. University students may be at increased risk of depression owing to the pressure and stress they encounter. Therefore, the purpose of this study is comparing the level of depression among male and female athletes and non-athletes undergraduate student of private university in Esfahan, Iran. The participants in this research are composed of 400 male and female athletes as well as no-athletes Iranian undergraduate students. The Beck depression test (BDI) was employed to measure the degree of depression. T-test was used to evaluate the distinction between athletes and non-athletes at P≤0.05. The ANOVA was conducted to examine whether there was a relationship between level of depression among non-athletes and athletes. The result showed that the prevalence rate of depression among non-athlete male undergraduate students is significantly higher than that of athlete male students. The results also presented that level of depression among female students is much more frequent compared to males. This can be due to the fatigue and lack of energy that are more frequent among female in comparison to the male students. Physical activity was negatively related to the level of depression by severity among male and female undergraduate students. However, there is no distinct relationship between physical activity and level of depression according to the age of athlete and nonathlete male and female undergraduate students. This study has essential implications for clinical psychology due to the relationship between physical activity and prevalence of depression.", "title": "" }, { "docid": "186141651bfb780865712deb8c407c54", "text": "Sample and statistically based singing synthesizers typically require a large amount of data for automatically generating expressive synthetic performances. In this paper we present a singing synthesizer that using two rather small databases is able to generate expressive synthesis from an input consisting of notes and lyrics. The system is based on unit selection and uses the Wide-Band Harmonic Sinusoidal Model for transforming samples. The first database focuses on expression and consists of less than 2 minutes of free expressive singing using solely vowels. The second one is the timbre database which for the English case consists of roughly 35 minutes of monotonic singing of a set of sentences, one syllable per beat. The synthesis is divided in two steps. First, an expressive vowel singing performance of the target song is generated using the expression database. Next, this performance is used as input control of the synthesis using the timbre database and the target lyrics. 
A selection of synthetic performances has been submitted to the Interspeech Singing Synthesis Challenge 2016, in which they are compared to other competing systems.", "title": "" }, { "docid": "70c8caf1bdbdaf29072903e20c432854", "text": "We show that the topological modular functor from Witten–Chern–Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor’s state space. A computational model based on Chern–Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere.", "title": "" }, { "docid": "3b7dcbefbbc20ca1a37fa318c2347b4c", "text": "To better understand how individual differences influence the use of information technology (IT), this study models and tests relationships among dynamic, IT-specific individual differences (i.e., computer self-efficacy and computer anxiety), stable, situation-specific traits (i.e., personal innovativeness in IT) and stable, broad traits (i.e., trait anxiety and negative affectivity). When compared to broad traits, the model suggests that situation-specific traits exert a more pervasive influence on IT situation-specific individual differences. Further, the model suggests that computer anxiety mediates the influence of situation-specific traits (i.e., personal innovativeness) on computer self-efficacy. Results provide support for many of the hypothesized relationships. From a theoretical perspective, the findings help to further our understanding of the nomological network among individual differences that lead to computer self-efficacy. From a practical perspective, the findings may help IT managers design training programs that more effectively increase the computer self-efficacy of users with different dispositional characteristics.", "title": "" }, { "docid": "ef6160d304908ea87287f2071dea5f6d", "text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.", "title": "" }, { "docid": "d8eafd22765903ea3b2e4f0bf0f1ad9d", "text": "Interest in \"green nanotechnology\" in nanoparticle biosynthesis is growing among researchers. Nanotechnologies, due to their physicochemical and biological properties, have applications in diverse fields, including drug delivery, sensors, optoelectronics, and magnetic devices.
This review focuses on the green synthesis of silver nanoparticles (AgNPs) using plant sources. Green synthesis of nanoparticles is an eco-friendly approach, which should be further explored for the potential of different plants to synthesize nanoparticles. The sizes of AgNPs are in the range of 1 to 100 nm. Characterization of synthesized nanoparticles is accomplished through UV spectroscopy, X-ray diffraction, Fourier transform infrared spectroscopy, transmission electron microscopy, and scanning electron microscopy. AgNPs have great potential to act as antimicrobial agents. The green synthesis of AgNPs can be efficiently applied for future engineering and medical concerns. Different types of cancers can be treated and/or controlled by phytonanotechnology. The present review provides a comprehensive survey of plant-mediated synthesis of AgNPs with specific focus on their applications, e.g., antimicrobial, antioxidant, and anticancer activities.", "title": "" }, { "docid": "5618f1415cace8bb8c4773a7e44a4e3f", "text": "Methods of evaluating and comparing the performance of diagnostic tests are of increasing importance as new tests are developed and marketed. When a test is based on an observed variable that lies on a continuous or graded scale, an assessment of the overall value of the test can be made through the use of a receiver operating characteristic (ROC) curve. The curve is constructed by varying the cutpoint used to determine which values of the observed variable will be considered abnormal and then plotting the resulting sensitivities against the corresponding false positive rates. When two or more empirical curves are constructed based on tests performed on the same individuals, statistical analysis on differences between curves must take into account the correlated nature of the data. This paper presents a nonparametric approach to the analysis of areas under correlated ROC curves, by using the theory on generalized U-statistics to generate an estimated covariance matrix.", "title": "" }, { "docid": "6c58cfbdbb424f1e2ad35339e7ee7aa6", "text": "We present a theoretical model of a multi-input arrayed waveguide grating (AWG) based on Fourier optics and apply the model to the design of a flattened passband response. This modeling makes it possible to systematically analyze spectral performance and to clarify the physical mechanisms of the multi-input AWG. The model suggested that the width of an input/output mode-field function and the number of waveguides in the array are important factors to flatten the response. We also developed a model for a novel AWG employing cascaded Mach-Zehnder interferometers connected to the AWG input ports and numerically analyzed its optical performance to achieve low-loss, low-crosstalk, and flat-passband response. We demonstrated the usability of this model through investigations of filter performance. We also compared the filter spectrum given by this model with that given by simulation using the beam propagation method", "title": "" }, { "docid": "ee1bbcdd8f332de297b6ea243da51b43", "text": "Automatic image annotation has been an active research topic due to its great importance in image retrieval and management. However, results of the state-of-the-art image annotation methods are often unsatisfactory. Despite continuous efforts in inventing new annotation algorithms, it would be advantageous to develop a dedicated approach that could refine imprecise annotations. 
In this paper, a novel approach to automatically refining the original annotations of images is proposed. For a query image, an existing image annotation method is first employed to obtain a set of candidate annotations. Then, the candidate annotations are re-ranked and only the top ones are reserved as the final annotations. By formulating the annotation refinement process as a Markov process and defining the candidate annotations as the states of a Markov chain, a content-based image annotation refinement (CIAR) algorithm is proposed to re-rank the candidate annotations. It leverages both corpus information and the content feature of a query image. Experimental results on a typical Corel dataset show not only the validity of the refinement, but also the superiority of the proposed algorithm over existing ones.", "title": "" }, { "docid": "48653a8de0dd6e881415855e694fc925", "text": "The aim of this study was to compare the use of transcutaneous vs. motor nerve stimulation in the evaluation of low-frequency fatigue. Nine female and eleven male subjects, all physically active, performed a 30-min downhill run on a motorized treadmill. Knee extensor muscle contractile characteristics were measured before, immediately after (Post), and 30 min after the fatiguing exercise (Post30) by using single twitches and 0.5-s tetani at 20 Hz (P20) and 80 Hz (P80). The P20-to-P80 ratio was calculated. Electrical stimulations were randomly applied either maximally to the femoral nerve or via large surface electrodes (ES) at an intensity sufficient to evoke 50% of maximal voluntary contraction (MVC) during a 80-Hz tetanus. Voluntary activation level was also determined during isometric MVC by the twitch-interpolation technique. Knee extensor MVC and voluntary activation level decreased at all points in time postexercise (P < 0.001). P20 and P80 displayed significant time x gender x stimulation method interactions (P < 0.05 and P < 0.001, respectively). Both stimulation methods detected significant torque reductions at Post and Post30. Overall, ES tended to detect a greater impairment at Post in male and a lesser one in female subjects at both Post and Post30. Interestingly, the P20-P80 ratio relative decrease did not differ between the two methods of stimulation. The low-to-high frequency ratio only demonstrated a significant time effect (P < 0.001). It can be concluded that low-frequency fatigue due to eccentric exercise appears to be accurately assessable by ES.", "title": "" }, { "docid": "02a276b26400fe37804298601b16bc13", "text": "Over the years, different meanings have been associated with the word consistency in the distributed systems community. While in the ’80s “consistency” typically meant strong consistency, later defined also as linearizability, in recent years, with the advent of highly available and scalable systems, the notion of “consistency” has been at the same time both weakened and blurred.\n In this article, we aim to fill the void in the literature by providing a structured and comprehensive overview of different consistency notions that appeared in distributed systems, and in particular storage systems research, in the last four decades. We overview more than 50 different consistency notions, ranging from linearizability to eventual and weak consistency, defining precisely many of these, in particular where the previous definitions were ambiguous. 
We further provide a partial order among different consistency predicates, ordering them by their semantic “strength,” which we believe will be useful in future research. Finally, we map the consistency semantics to different practical systems and research prototypes.\n The scope of this article is restricted to non-transactional semantics, that is, those that apply to single storage object operations. As such, our article complements the existing surveys done in the context of transactional, database consistency semantics.", "title": "" }, { "docid": "3f9faa5f62cfca0492797c50810ce7e1", "text": "3D-GAN (Wu et al. in: Advances in Neural Information Processing Systems, pp. 82–90, 2016) has been introduced as a novel way to generate 3D models. In this paper, we propose a 3D-Masked-CGAN approach to apply in the generation of irregular 3D mesh geometry such as rocks. While there are many ways to generate 3D objects, the generation of irregular 3D models has its own peculiarity. To make a model realistic is extremely time-consuming and in high cost. In order to control the shape of generated 3D models, we extend 3D-GAN by adding conditional information into both the generator and discriminator. It is shown that that this model can generate 3D rock models with effective control over the shapes of generated models.", "title": "" }, { "docid": "1b5a800affc14f3693004d021677357d", "text": "Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging 19-layer deep convolutional neural networks that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need of sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and only needs minimum pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.", "title": "" } ]
scidocsrr
40a2e8b8e002341a446e3c46eb9b21d8
Modelling OWL Ontologies with Graffoo
[ { "docid": "6549a00df9fadd56b611ee9210102fe8", "text": "Ontology editors are software tools that allow the creation and maintenance of ontologies through a graphical user interface. As the Semantic Web effort grows, a larger community of users for this kind of tools is expected. New users include people not specifically skilled in the use of ontology formalisms. In consequence, the usability of ontology editors can be viewed as a key adoption precondition for Semantic Web technologies. In this paper, the usability evaluation of several representative ontology editors is described. This evaluation is carried out by combining a heuristic pre-assessment and a subsequent user-testing phase. The target population comprises people with no specific ontology-creation skills that have a general knowledge about domain modelling. The problems found point out that, for this kind of users, current editors are adequate for the creation and maintenance of simple ontologies, but also that there is room for improvement, especially in browsing mechanisms, help systems and visualization metaphors.", "title": "" }, { "docid": "9d330ac4c902c80b19b5f578e3bd9125", "text": "Since its introduction in 1986, the 10-item System Usability Scale (SUS) has been assumed to be unidimensional. Factor analysis of two independent SUS data sets reveals that the SUS actually has two factors – Usability (8 items) and Learnability (2 items). These new scales have reasonable reliability (coefficient alpha of .91 and .70, respectively). They correlate highly with the overall SUS (r = .985 and .784, respectively) and correlate significantly with one another (r = .664), but at a low enough level to use as separate scales. A sensitivity analysis using data from 19 tests had a significant Test by Scale interaction, providing additional evidence of the differential utility of the new scales. Practitioners can continue to use the current SUS as is, but, at no extra cost, can also take advantage of these new scales to extract additional information from their SUS data.", "title": "" } ]
[ { "docid": "a4e1a0f5e56685a294a2c9088809a4fb", "text": "As multicore systems continue to gain ground in the High Performance Computing world, linear algebra algorithms have to be reformulated or new algorithms have to be developed in order to take advantage of the architectural features on these new processors. Fine grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the Cholesky, LU and QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data. These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out of order execution of the tasks which will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithms where parallelism can only be exploited at the level of the BLAS operations and vendor implementations.", "title": "" }, { "docid": "4ab971e837286b95ebbdd1f99c6749c0", "text": "In this paper we demonstrate results of a technique for synchronizing clocks and estimating ranges between a pair of RF transceivers. The technique uses a periodic exchange of ranging waveforms between two transceivers along with sophisticated delay estimation and tracking. The technique was implemented on wireless testbed transceivers with independent clocks and tested over-the-air in stationary and moving configurations. The technique achieved ~10ps synchronization accuracy and 2.1mm range deviation, using A two-channel oscilloscope and tape measure as truth sources. The timing resolution attained is three orders of magnitude better than the inverse signal bandwidth of the ranging waveform (50MHz⇒ 6m resolution), and is within a small fraction of the carrier wavelength (915MHz⇒ 327mm wavelength). We discuss how this result is consistent with the Weiss-Weinstein bound and cite new applications enabled by this technique.", "title": "" }, { "docid": "aacaadc8175f1c42338d0e72c0234686", "text": "For successful physical human-robot interaction, the capability of a robot to understand its environment is imperative. More importantly, the robot should extract from the human operator as much information as possible. A reliable 3D skeleton extraction is essential for a robot to predict the intentions of the operator while s/he moves toward the robot or performs a meaningful gesture. For this purpose, we have integrated a time-of-flight depth camera with a state-of-the-art 2D skeleton extraction library namely Openpose, to obtain 3D skeletal joint coordinates reliably. We have also developed a robust and rotation invariant (in the coronal plane)hand gesture detector using a convolutional neural network. At run time (after having been trained)the detector does not require any pre-processing of the hand images. A complete pipeline for skeleton extraction and hand gesture recognition is developed and employed for real-time physical human-robot interaction, demonstrating the promising capability of the designed framework. 
This work establishes a firm basis and will be extended for the development of intelligent human intention detection in physical human-robot interaction scenarios, to efficiently recognize a variety of static as well as dynamic gestures.", "title": "" }, { "docid": "1feaf48291b7ea83d173b70c23a3b7c0", "text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).", "title": "" }, { "docid": "5f21a1348ad836ded2fd3d3264455139", "text": "To date, brain imaging has largely relied on X-ray computed tomography and magnetic resonance angiography with limited spatial resolution and long scanning times. Fluorescence-based brain imaging in the visible and traditional near-infrared regions (400-900 nm) is an alternative but currently requires craniotomy, cranial windows and skull thinning techniques, and the penetration depth is limited to 1-2 mm due to light scattering. Here, we report through-scalp and through-skull fluorescence imaging of mouse cerebral vasculature without craniotomy utilizing the intrinsic photoluminescence of single-walled carbon nanotubes in the 1.3-1.4 micrometre near-infrared window. Reduced photon scattering in this spectral region allows fluorescence imaging reaching a depth of >2 mm in mouse brain with sub-10 micrometre resolution. An imaging rate of ~5.3 frames/s allows for dynamic recording of blood perfusion in the cerebral vessels with sufficient temporal resolution, providing real-time assessment of blood flow anomaly in a mouse middle cerebral artery occlusion stroke model.", "title": "" }, { "docid": "ff947ccb7efdd5517f9b60f9c11ade6a", "text": "Several messages express opinions about events, products, and services, political views or even their author's emotional state and mood. Sentiment analysis has been used in several applications including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. 
Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.", "title": "" }, { "docid": "5320ff5b9e2a3d0d206bb74ed0e047cd", "text": "To the Editor: How do Shai et al. (July 17 issue)1 explain why the subjects in their study regained weight between month 6 and month 24, despite a reported reduction of 300 to 600 calories per day? Contributing possibilities may include the notion that a food-frequency questionnaire cannot precisely determine energy or macronutrient intake but, rather, ascertains general dietary patterns. Certain populations may underreport intake2,3 and have a decreased metabolic rate. The authors did not measure body composition, which is critical for documenting weight-loss components. In addition, the titles of the diets that are described in the article are misleading. Labeling the “low-carbohydrate” diet as such is questionable, since 40 to 42% of calories were from carbohydrates from month 6 to month 24, and data regarding ketosis support this view. Participants in the low-fat and Mediterranean-diet groups consumed between 30% and 33% of calories from fat and did not increase fiber consumption, highlighting the importance of diet quality. Furthermore, the authors should have provided baseline values and P values for within-group changes from baseline (see Table 2 of the article). Contrary to the authors’ assertion, it is not surprising that the effects on many biomarkers were minimal, since the dietary changes were minimal. The absence of biologically significant weight loss (2 to 4% after 2 years) highlights the fact that energy restriction and weight loss in themselves may minimally affect metabolic outcomes and that lifestyle changes must incorporate physical activity to optimize the reduction in the risk of chronic disease.4,5 Christian K. Roberts, Ph.D. R. James Barnard, Ph.D. Daniel M. Croymans, B.S.", "title": "" }, { "docid": "2653554c6dec7e9cfa0f5a4080d251e2", "text": "Clustering is a key technique within the KDD process, with k-means, and the more general k-medoids, being well-known incremental partition-based clustering algorithms. A fundamental issue within this class of algorithms is to find an initial set of medians (or medoids) that improves the efficiency of the algorithms (e.g., accelerating its convergence to a solution), at the same time that it improves its effectiveness (e.g., finding more meaningful clusters). 
Thus, in this article we aim at providing a technique that, given a set of elements, quickly finds a very small number of elements as medoid candidates for this set, which improves both the efficiency and effectiveness of existing clustering algorithms. We target the class of k-medoids algorithms in general, and propose a technique that selects a well-positioned subset of central elements to serve as the initial set of medoids for the clustering process. Our technique leads to a substantially smaller amount of distance calculations, thus improving the algorithm’s efficiency when compared to existing methods, without sacrificing effectiveness. A salient feature of our proposed technique is that it is not a new k-medoid clustering algorithm per se, rather, it can be used in conjunction with any existing clustering algorithm that is based on the k-medoid paradigm. Experimental results, using both synthetic and real datasets, confirm the efficiency, effectiveness and scalability of the proposed technique.", "title": "" }, { "docid": "6bf002e1a3f544ebf599940ef22c1911", "text": "In this paper, we present a new approach for fingerprint classification based on Discrete Fourier Transform (DFT) and nonlinear discriminant analysis. Utilizing the Discrete Fourier Transform and directional filters, a reliable and efficient directional image is constructed from each fingerprint image, and then nonlinear discriminant analysis is applied to the constructed directional images, reducing the dimension dramatically and extracting the discriminant features. The proposed method explores the capability of DFT and directional filtering in dealing with low quality images and the effectiveness of the nonlinear feature extraction method in fingerprint classification. Experimental results demonstrate competitive performance compared with other published results.", "title": "" }, { "docid": "044756096a67edd1681d00afbdd7d40e", "text": "We report in this paper two types of broadband transitions between microstrip and coplanar lines on thin benzocyclobutene (BCB) polymer substrate. They are both via-free, using electromagnetic coupling between the bottom and top ground planes, which simplifies the manufacturing of components driven by microstrip electrodes. In the first ones, the bottom ground is not patterned, which makes them particularly suitable to on-wafer measurement of components under development with coplanar probes. An ultra-broad bandwidth of 68 GHz (from 1 GHz to 69 GHz) was achieved with 20-μm BCB. In the second ones, intended for connectorizing components on thin substrate with coplanar connectors, the bottom ground is patterned to match the narrow center conductor (54 μm) on thin substrate to the wide center conductor (127 μm) of the connector with a tapered section, achieving an experimental bandwidth of 13 GHz for the moment.", "title": "" }, { "docid": "539294c5fbe3fa7e96524f5260dbb7a1", "text": "Demonstrations of mm-Wave arrays with >50 elements in silicon have led to an interest in large-scale mm-Wave MIMO arrays for 5G networks, which promise substantial improvements in network capacity [1,2]. Practical considerations result in such arrays being developed with a tiled approach, where N unit cells with M elements each are tiled to achieve large MIMO/phased arrays with NM elements [2]. Achieving stringent phase-noise specifications and scalable LO distribution to maintain phase coherence across different unit cell ICs/PCBs are critical challenges.
In this paper, we demonstrate a scalable, single-wire-synchronization architecture and circuits for mm-Wave arrays that preserve the simplicity of daisy-chained LO distribution, compensate for phase offset due to interconnects, and provide phase-noise improvement with increasing number of PLLs [3]. Measurements on a scalable 28GHz prototype demonstrate a 21% improvement in rms jitter and a 3.4dB improvement in phase noise at 10MHz offset when coupling 28GHz PLLs across three different ICs.", "title": "" }, { "docid": "304b4cee4006e87fc4172a3e9de88ed1", "text": "Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs—a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DIFFPOOL yields an average improvement of 5–10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.", "title": "" }, { "docid": "c29efdd4ef9607a92c4239c08710b089", "text": "Network coding over time varying channels has been investigated and a new scheme is proposed. We propose a novel model for packet transmission over time variant channels that exploits the channel delay profile and the dependency between channel states via first order auto-regression for Ka-band satellite communications. We provide an approximation of the delay induced assuming finite number of time slots to transmit a given number of packets. We also propose a novel adaptive transmission scheme that compensates for the lost degrees of freedom by tracking the packet erasures over time. Our results show that network coding non-adaptive mechanism for time variant channels has around 2 times throughput and delay performance gains for small size packets over network coding mechanisms with fixed channel erasures and similar performance gains for large size packets. In addition, it is shown that network coding non-adaptive mechanism for time variant channels has similar performance to the Selective Repeat (SR) with ARQ, and better performance when packet error probability is high, while due to better utilization of channel resources SR performance is similar or moderately better at very low erasures, i.e., at high SNR. However, our adaptive transmission scheme outperforms the network coding non-adaptive mechanism and SR with more than 7 times in throughput and delay performance gains.", "title": "" }, { "docid": "e4ca7c16acd9b71a5ae7f1ee29101782", "text": "Recently, distributed generators and sensitive loads have been widely used. They enable a solid-state circuit breaker (SSCB), which is an imperative device to get acceptable power quality of ac power grid systems. 
The existing ac SSCB composed of a silicon-controlled rectifier requires some auxiliary mechanical devices to achieve the reclosing operation before fault recovery. However, the new ac SSCB can achieve a quick breaking operation and then be reclosed with no auxiliary mechanical devices or complex control even under sustained short-circuit fault because the commutation capacitors are charged naturally without any complex control of main thyristors and auxiliary ones. The performance features of the proposed ac SSCB are verified through the experiment results of the short-circuit faults.", "title": "" }, { "docid": "df4b4119653789266134cf0b7571e332", "text": "Automatic detection of lymphocyte in H&E images is a necessary first step in lots of tissue image analysis algorithms. An accurate and robust automated lymphocyte detection approach is of great importance in both computer science and clinical studies. Most of the existing approaches for lymphocyte detection are based on traditional image processing algorithms and/or classic machine learning methods. In the recent years, deep learning techniques have fundamentally transformed the way that a computer interprets images and have become a matchless solution in various pattern recognition problems. In this work, we design a new deep neural network model which extends the fully convolutional network by combining the ideas in several recent techniques, such as shortcut links. Also, we design a new training scheme taking the prior knowledge about lymphocytes into consideration. The training scheme not only efficiently exploits the limited amount of free-form annotations from pathologists, but also naturally supports efficient fine-tuning. As a consequence, our model has the potential of self-improvement by leveraging the errors collected during real applications. Our experiments show that our deep neural network model achieves good performance in the images of different staining conditions or different types of tissues.", "title": "" }, { "docid": "da64b7855ec158e97d48b31e36f100a5", "text": "Named Entity Recognition (NER) is the task of classifying or labelling atomic elements in the text into categories such as Person, Location or Organisation. For Arabic language, recognizing named entities is a challenging task because of the complexity and the unique characteristics of this language. In addition, most of the previous work focuses on Modern Standard Arabic (MSA), however, recognizing named entities in social media is becoming more interesting these days. Dialectal Arabic (DA) and MSA are both used in social media, which is deemed as another challenging task. Most state-of-the-art Arabic NER systems count heavily on hand-crafted engineering features and lexicons which is time consuming. In this paper, we introduce a novel neural network architecture which benefits both from characterand word-level representations automatically, by using combination of bidirectional Long Short-Term Memory (LSTM) and Conditional Random Field (CRF), eliminating the need for most feature engineering. Moreover, our model relies on unsupervised word representations learned from unannotated corpora. 
Experimental results demonstrate that our model achieves state-of-the-art performance on publicly available benchmark for Arabic NER for social media and surpassing the previous system by a large margin.", "title": "" }, { "docid": "a0d3ebfb9a3f3c27ee2d23a74dba1f50", "text": "Machine Learning (ML) has been successful in automating a range of cognitive tasks that humans solve effortlessly and quickly. Yet many realworld tasks are difficult and slow : people solve them by an extended process that involves analytical reasoning, gathering external information, and discussing with collaborators. Examples include medical advice, judging a criminal trial, and providing personalized recommendations for rich content such as books or academic papers. There is great demand for automating tasks that require deliberative judgment. Current ML approaches can be unreliable: this is partly because such tasks are intrinsically difficult (even AI-complete) and partly because assembling datasets of deliberative judgments is expensive (each label might take hours of human work). We consider addressing this data problem by collecting fast judgments and using them to help predict deliberative (slow) judgments. Instead of having a human spend hours on a task, we might instead collect their judgment after 30 seconds or 10 minutes. These fast judgments are combined with a smaller quantity of slow judgments to provide training data. The resulting prediction problem is related to semi-supervised learning and collaborative filtering. We designed two tasks for the purpose of testing ML algorithms on predicting human deliberative judgments. One task involves Fermi estimation (back-of-the-envelope estimation) and the other involves judging the veracity of political statements. We collected a dataset of 25,000 judgments from more than 800 people. We define an ML prediction task for predicting deliberative judgments given a training set that also contains fast judgments. We tested a variety of baseline algorithms on this task. Unfortunately our dataset has serious limitations. Additional work is required to create a good testbed for predicting human deliberative judgments. This technical report explains the motivation for our project (which might be built on in future work) and explains how further work can avoid our mistakes. Our dataset and code is available at https: //github.com/oughtinc/psj. ∗University of Oxford †Ought Inc.", "title": "" }, { "docid": "84e8986eff7cb95808de8df9ac286e37", "text": "The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between histograms to capture cross-bin relationships. We also propose a new algorithm for trimming videos — to remove all the unimportant frames from videos. Our two methods both outperform other published methods and help narrow down the gap between human performance and algorithms on this task. The code has been made publicly available in the MLOSS repository.", "title": "" }, { "docid": "81d4f23c5b6d407e306569f4e3ad4be9", "text": "While much progress has been made in wearable computing in recent years, input techniques remain a key challenge. 
In this paper, we introduce uTrack, a technique to convert the thumb and fingers into a 3D input system using magnetic field (MF) sensing. A user wears a pair of magnetometers on the back of their fingers and a permanent magnet affixed to the back of the thumb. By moving the thumb across the fingers, we obtain a continuous input stream that can be used for 3D pointing. Specifically, our novel algorithm calculates the magnet's 3D position and tilt angle directly from the sensor readings. We evaluated uTrack as an input device, showing an average tracking accuracy of 4.84 mm in 3D space - sufficient for subtle interaction. We also demonstrate a real-time prototype and example applications allowing users to interact with the computer using 3D finger input.", "title": "" }, { "docid": "50c639dfa7063d77cda26666eabeb969", "text": "This paper addresses the problem of detecting people in two dimensional range scans. Previous approaches have mostly used pre-defined features for the detection and tracking of people. We propose an approach that utilizes a supervised learning technique to create a classifier that facilitates the detection of people. In particular, our approach applies AdaBoost to train a strong classifier from simple features of groups of neighboring beams corresponding to legs in range data. Experimental results carried out with laser range data illustrate the robustness of our approach even in cluttered office environments", "title": "" } ]
scidocsrr
4d0be0f60aeede58fb7877a91f0affe5
ScaleNet: Scale Invariant Network for Semantic Segmentation in Urban Driving Scenes
[ { "docid": "4d2be7aac363b77c6abd083947bc28c7", "text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.", "title": "" }, { "docid": "7eec1e737523dc3b78de135fc71b058f", "text": "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences, generally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches", "title": "" } ]
[ { "docid": "d5b4018422fdee8d3f4f33343f3de290", "text": "Given a pair of handwritten documents written by different individuals, compute a document similarity score irrespective of (i) handwritten styles, (ii) word forms, word ordering and word overflow. • IIIT-HWS: Introducing a large scale synthetic corpus of handwritten word images for enabling deep architectures. • HWNet: A deep CNN architecture for state of the art handwritten word spotting in multi-writer scenarios. • MODS: Measure of document similarity score irrespective of word forms, ordering and paraphrasing of the content. • Applications in Educational Scenario: Comparing handwritten assignments, searching through instructional videos. 2. Contributions 3. Challenges", "title": "" }, { "docid": "b40b81e25501b08a07c64f68c851f4a6", "text": "Variable reluctance (VR) resolver is widely used in traction motor for battery electric vehicle as well as hybrid electric vehicle as a rotor position sensor. VR resolver generates absolute position signal by using resolver-to-digital converter (RDC) in order to deliver exact position of permanent magnets in a rotor of traction motor to motor controller. This paper deals with fault diagnosis of VR resolver by using co-simulation analysis with RDC for position angle detection. As fault conditions, eccentricity of VR resolver, short circuit condition of excitation coil and output signal coils, and material problem of silicon steel in a view point of permeability are considered. 2D FEM is used for the output signal waveforms of SIN, COS and these waveforms are converted into absolute position angle by using the algorithm of RDC. For the verification of proposed analysis results, experiment on fault conditions was conducted and compared with simulation ones.", "title": "" }, { "docid": "8bbbaab2cf7825ca98937de14908e655", "text": "Software Reliability Model is categorized into two, one is static model and the other one is dynamic model. Dynamic models observe the temporary behavior of debugging process during testing phase. In Static Models, modeling and analysis of program logic is done on the same code. A Model which describes about error detection in software Reliability is called Software Reliability Growth Model. This paper reviews various existing software reliability models and there failure intensity function and the mean value function. On the basis of this review a model is proposed for the software reliability having different mean value function and failure intensity function.", "title": "" }, { "docid": "263488a376e419cbbd6cd7c4ecc70a4f", "text": "This paper discusses the ethical issues related to hemicorporectomy surgery, a radical procedure that removes the lower half of the body in order to prolong life. The literature on hemicorporectomy (HC), also called translumbar amputation, has been nearly silent on the ethical considerations relevant to this rare procedure. We explore five aspects of the complex landscape of hemicorporectomy to illustrate the broader ethical questions related to this extraordinary procedure: benefits, risks, informed consent, resource allocation and justice, and loss and the lived body.", "title": "" }, { "docid": "f66ce6fb4675091de36b3e64e4fb52a5", "text": "Phishing has been a major problem for information systems managers and users for several years now. In 2008, it was estimated that phishing resulted in close to $50 billion in damages to U.S. consumers and businesses. 
Even so, research has yet to explore many of the reasons why Internet users continue to be exploited. The goal of this paper is to better understand the behavioral factors that may increase one’s susceptibility for complying with a phisher’s request for personal information. Using past research on deception detection, a research model was developed to help explain compliant phishing responses. The model was tested using a field study in which each participant received a phishing e-mail asking for sensitive information. It was found that four behavioral factors were influential as to whether the phishing e-mails were answered with sensitive information. The paper concludes by suggesting that the behavioral aspect of susceptible users be integrated into the current tools and materials used in antiphishing efforts. Key words and phrases: computer-mediated deception, electronic mail fraud, Internet security, interpersonal deception theory, phishing. The Internet has opened up a wealth of opportunities for individuals and businesses to expand the reach and range of their personal and commercial transactions, but these openings have also created a venue for a number of computer security issues that must be addressed. Investments in security hardware and software are now fundamental parts of a company’s information technology (IT) budget. Also, security policies are continually developed and refined to reduce technical vulnerabilities. However, the frequent use of Internet technologies by corporations can also introduce new vulnerabilities. One recent phenomenon that exploits end users’ carelessness is phishing. Phishing uses obfuscation of both e-mails and Web sites to trick Web users into complying with a request for personal information [5, 27]. The deceitful people behind the scam, the “phishers,” are then able to use the personal information for a number of illicit activities, ranging from individual identity theft to the theft of a company’s intellectual property. According to some estimates, phishing results in close to $50 billion of damage to U.S. consumers and businesses a year [49, 71]. In 2007, phishing attacks increased and some 3 million adults lost over $3 billion in the 12 months ending in August 2007 [29]. Although some reports indicate that the annual financial damage is not rising dramatically from year to year, the number of reported victims is increasing at a significant rate [35]. Phishing continues to be a very real problem for Web users in all walks of life. Consistent with the “fishing” homonym, phishing attacks are often described by using a “bait-and-hook” metaphor [70]. The “bait” consists of a mass e-mail submission sent to a large number of random and unsuspecting recipients. The message strongly mimics the look and feel of a legitimate business, including the use of familiar logos and slogans. The e-mail often requests the recipient’s aid in correcting a technical problem with his or her user account, ostensibly by confirming or “resupplying” a user ID, a password, a credit card number, or other personal information. The message typically encourages recipients to visit a bogus Web site (the “hook”) that is similar in appearance to an actual corporate Web site, except that user-supplied information is not sent to the legitimate company’s Web server, but to a server of the phisher’s choosing. The phishing effort is relatively low in terms of cost and risk for the phishers.
Further, phishers may reside in international locations that place them out of reach of authorities in the victim’s jurisdiction, making prosecution much more difficult [33]. Phishers are rarely apprehended and prosecuted for the fraud they commit. Developing methods for detecting phishing before any damage is inflicted is a priority, and several approaches for detection have resulted from the effort. Technical countermeasures, such as e-mail filtering and antiphishing toolbars, successfully detect phishing attempts in about 35 percent of cases [84]. Updating fraud definitions, flagging bogus Web sites, and preventing false alarms from occurring continues to challenge individual users and IT departments alike. An automated comparison of the design, layout, and style characteristics between authentic and fraudulent Web sites has been shown to be more promising than a simple visual inspection made by a visitor, but an up-to-date registry of valid and invalid Web sites must be available for such a method to be practical [55]. Because of ineffective technological methods of prevention, much of the responsibility for detecting phishing lies with the end user, and an effective strategy for guarding against phishing should include both technological and human detectors. However, prior research has shown that, like technology, people are also limited in terms of detecting the fraud once they are coerced into visiting a bogus Web site [19]. Once the message recipient chooses to visit a fraudulent Web site, he or she is unlikely to detect the fraudulent nature of the request and the “hook” will have been set. In order to prevent users from sending sensitive information to phishers, educating and training e-mail users about fraud prevention and detection at the “bait” stage must be considered the first line of defense [53]. The goal of this paper is to better understand, given the large number of phishing attempts and the vast amount of attention given to phishing in the popular press, why users of online applications such as e-mail and instant messaging still fall prey to these fraudulent efforts.", "title": "" }, { "docid": "2a262a72133922a9232e9a3808341359", "text": "Autonomous driving has harsh requirements of small model size and energy efficiency, in order to enable the embedded system to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models are trained with numerous parameters and their high computational costs and large storage prohibit the deployment to memory and computation resource limited systems. Low-precision neural networks are popular techniques for reducing the computation requirements and memory footprint. Among them, binary weight neural network (BWN) is the extreme case which quantizes the float-point into just 1 bit. BWNs are difficult to train and suffer from accuracy deprecation due to the extreme low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of BWN using a full-precision teacher network. We built DarkNet- and MobileNet-based binary weight YOLO-v2 detectors and conduct experiments on KITTI benchmark for car, pedestrian and cyclist detection.
The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and MobileNet-YOLO from 193 MB to 7.9 MB.", "title": "" }, { "docid": "80c4198d97b42988aa2fccaa97667bcc", "text": "Although the principles of gossip protocols are relatively easy to grasp, their variety can make their design and evaluation highly time consuming. This problem is compounded by the lack of a unified programming framework for gossip, which means developers cannot easily reuse, compose, or adapt existing solutions to fit their needs, and have limited opportunities to share knowledge and ideas. In this paper, we consider how component frameworks, which have been widely applied to implement middleware solutions, can facilitate the development of gossip-based systems in a way that is both generic and simple. We show how such an approach can maximize code reuse, simplify the implementation of gossip protocols, and facilitate dynamic evolution and redeployment.Also known as “epidemic” protocols.", "title": "" }, { "docid": "a4c312bfe90cecb0b999d6b1c8548fd8", "text": "Wireless Mesh Networks (WMNs) introduce a new paradigm ‎of wireless broadband Internet access by providing high data ‎rate service, scalability, and self-healing abilities at reduced ‎cost. ‎Obtaining high throughput for multi-cast applications (e.g. ‎video streaming broadcast) in WMNs is challenging due to the ‎interference and the change of channel quality. To overcome ‎this issue, cross-layer has been proposed to improve the ‎performance of WMNs. ‎Network coding is a powerful coding technique that has been ‎proven to be the very effective in achieving the maximum multi-cast ‎throughput. In addition to achieving the multi-cast ‎throughput, network coding offers other benefits such as load ‎balancing and saves bandwidth consumption. This ‎paper presents a review the fundamental concept types of medium access control ‎(MAC) layer, routing protocols, cross-layer and network ‎coding for wireless mesh networks. Finally, a list of directions for further research is considered. ", "title": "" }, { "docid": "03f913234dc6d41aada7ce3fe8de1203", "text": "Epicanthoplasty is commonly performed on Asian eyelids. Consequently, overcorrection may appear. The aim of this study was to introduce a method of reconstructing the epicanthal fold and to apply this method to the patients. A V flap with an extension (eagle beak shaped) was designed on the medial canthal area. The upper incision line started near the medial end of the double-fold line, and it followed its curvature inferomedially. For the lower incision, starting at the tip (medial end) of the flap, a curvilinear incision was designed first diagonally and then horizontally along the lower blepharoplasty line. The V flap was elevated as thin as possible. Then, the upper flap was deeply undermined to make it thick. The lower flap was made a little thinner than the upper flap. Then, the upper and lower flaps were approximated to form the anteromedial surface of the epicanthal fold in a fashion sufficient to cover the red caruncle. The V flap was rotated inferolaterally over the caruncle. The tip of the V flap was sutured to the medial one-third point of the lower margin. The inferior border of the V flap and the residual lower margin were approximated. Thereafter, the posterolateral surface of the epicanthal fold was made. From 1999 to 2011, 246 patients were operated on using this method. 
Among them, 62 patients were followed up. The mean intercanthal distance was increased from 31.7 to 33.8 mm postoperatively. Among the 246 patients operated on, reoperation was performed for 6 patients. Among the 6 patients reoperated on, 3 cases were due to epicanthus inversus, 1 case was due to insufficient reconstruction, 1 case was due to making an infold, and 1 case was due to reopening the epicanthal fold.This V-Y and rotation flap can be a useful method for reconstruction of the epicanthal fold.", "title": "" }, { "docid": "83bec63fb2932aec5840a9323cc290b4", "text": "This paper extends fully-convolutional neural networks (FCN) for the clothing parsing problem. Clothing parsing requires higher-level knowledge on clothing semantics and contextual cues to disambiguate fine-grained categories. We extend FCN architecture with a side-branch network which we refer outfit encoder to predict a consistent set of clothing labels to encourage combinatorial preference, and with conditional random field (CRF) to explicitly consider coherent label assignment to the given image. The empirical results using Fashionista and CFPD datasets show that our model achieves state-of-the-art performance in clothing parsing, without additional supervision during training. We also study the qualitative influence of annotation on the current clothing parsing benchmarks, with our Web-based tool for multi-scale pixel-wise annotation and manual refinement effort to the Fashionista dataset. Finally, we show that the image representation of the outfit encoder is useful for dress-up image retrieval application.", "title": "" }, { "docid": "0f00e029fa2ae5223dca2049680b4d16", "text": "Many classifications of attacks have been tendered, often in taxonomic form, A common basis of these taxonomies is that they have been framed from the perspective of an attacker - they organize attacks with respect to the attacker's goals, such as privilege elevation from user to root (from the well known Lincoln taxonomy). Taxonomies based on attacker goals are attack-centric; those based on defender goals are defense-centric. Defenders need a way of determining whether or not their detectors will detect a given attack. It is suggested that a defense-centric taxonomy would suit this role more effectively than an attack-centric taxonomy. This paper presents a new, defense-centric attack taxonomy, based on the way that attacks manifest as anomalies in monitored sensor data. Unique manifestations, drawn from 25 attacks, were used to organize the taxonomy, which was validated through exposure to an intrusion-detection system, confirming attack detect ability. The taxonomy's predictive utility was compared against that of a well-known extant attack-centric taxonomy. The defense-centric taxonomy is shown to be a more effective predictor of a detector's ability to detect specific attacks, hence informing a defender that a given detector is competent against an entire class of attacks.", "title": "" }, { "docid": "2ae53bfe80e74c27ea9ed5e5efadfbe7", "text": "The use of multiple features has been shown to be an effective strategy for visual tracking because of their complementary contributions to appearance modeling. The key problem is how to learn a fused representation from multiple features for appearance modeling. 
Different features extracted from the same object should share some commonalities in their representations while each feature should also have some feature-specific representation patterns which reflect its complementarity in appearance modeling. Different from existing multi-feature sparse trackers which only consider the commonalities among the sparsity patterns of multiple features, this paper proposes a novel multiple sparse representation framework for visual tracking which jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns. Moreover, we introduce a novel online multiple metric learning to efficiently and adaptively incorporate the appearance proximity constraint, which ensures that the learned commonalities of multiple features are more representative. Experimental results on tracking benchmark videos and other challenging videos demonstrate the effectiveness of the proposed tracker.", "title": "" }, { "docid": "7ca8483e91485d29b58f0f98194c13a3", "text": "Managing Network Function (NF) service chains requires careful system resource management. We propose NFVnice, a user space NF scheduling and service chain management framework to provide fair, efficient and dynamic resource scheduling capabilities on Network Function Virtualization (NFV) platforms. The NFVnice framework monitors load on a service chain at high frequency (1000Hz) and employs backpressure to shed load early in the service chain, thereby preventing wasted work. Borrowing concepts such as rate proportional scheduling from hardware packet schedulers, CPU shares are computed by accounting for heterogeneous packet processing costs of NFs, I/O, and traffic arrival characteristics. By leveraging cgroups, a user space process scheduling abstraction exposed by the operating system, NFVnice is capable of controlling when network functions should be scheduled. NFVnice improves NF performance by complementing the capabilities of the OS scheduler but without requiring changes to the OS's scheduling mechanisms. Our controlled experiments show that NFVnice provides the appropriate rate-cost proportional fair share of CPU to NFs and significantly improves NF performance (throughput and loss) by reducing wasted work across an NF chain, compared to using the default OS scheduler. NFVnice achieves this even for heterogeneous NFs with vastly different computational costs and for heterogeneous workloads.", "title": "" }, { "docid": "7330b8af3f4b78c5965b2e847586d837", "text": "Bipolar disorder is characterized by recurrent manic and depressive episodes. Patients suffering from this disorder experience dramatic mood swings with a wide variety of typical behavioral facets, affecting overall activity, energy, sexual behavior, sense of self, self-esteem, circadian rhythm, cognition, and increased risk for suicide. Effective treatment options are limited and diagnosis can be complicated. To overcome these obstacles, a better understanding of the neurobiology underlying bipolar disorder is needed. Animal models can be useful tools in understanding brain mechanisms associated with certain behavior. The following review discusses several pathological aspects of humans suffering from bipolar disorder and compares these findings with insights obtained from several animal models mimicking diverse facets of its symptomatology. 
Various sections of the review concentrate on specific topics that are relevant in human patients, namely circadian rhythms, neurotransmitters, focusing on the dopaminergic system, stressful environment, and the immune system. We then explain how these areas have been manipulated to create animal models for the disorder. Even though several approaches have been conducted, there is still a lack of adequate animal models for bipolar disorder. Specifically, most animal models mimic only mania or depression and only a few include the cyclical nature of the human condition. Future studies could therefore focus on modeling both episodes in the same animal model to also have the possibility to investigate the switch from mania-like behavior to depressive-like behavior and vice versa. The use of viral tools and a focus on circadian rhythms and the immune system might make the creation of such animal models possible.", "title": "" }, { "docid": "39d3f1a5d40325bdc4bca9ee50241c9e", "text": "This paper reviews the recent progress of quantum-dot semiconductor optical amplifiers developed as ultrawideband polarization-insensitive high-power amplifiers, high-speed signal regenerators, and wideband wavelength converters. A semiconductor optical amplifier having a gain of > 25 dB, noise figure of < 5 dB, and 3-dB saturation output power of > 20 dBm, over the record widest bandwidth of 90 nm among all kinds of optical amplifiers, and also having a penalty-free output power of 23 dBm, the record highest among all the semiconductor optical amplifiers, was realized by using quantum dots. By utilizing isotropically shaped quantum dots, the TM gain, which is absent in the standard Stranski-Krastanow QDs, has been drastically enhanced, and nearly polarization-insensitive SOAs have been realized for the first time. With an ultrafast gain response unique to quantum dots, an optical regenerator having receiver-sensitivity improving capability of 4 dB at a BER of 10-9 and operating speed of > 40 Gb/s has been successfully realized with an SOA chip. This performance achieved together with simplicity of structure suggests a potential for low-cost realization of regenerative transmission systems.", "title": "" }, { "docid": "7b35fd3b03da392ecdd997be16ed9040", "text": "Sampling based planners have become increasingly efficient in solving the problems of classical motion planning and its applications. In particular, techniques based on the rapidly-exploring random trees (RRTs) have generated highly successful single-query planners. Recently, a variant of this planner called dynamic-domain RRT was introduced by Yershova et al. (2005). It relies on a new sampling scheme that improves the performance of the RRT approach on many motion planning problems. One of the drawbacks of this method is that it introduces a new parameter that requires careful tuning. In this paper we analyze the influence of this parameter and propose a new variant of the dynamic-domain RRT, which iteratively adapts the sampling domain for the Voronoi region of each node during the search process. This allows automatic tuning of the parameter and significantly increases the robustness of the algorithm. The resulting variant of the algorithm has been tested on several path planning problems.", "title": "" }, { "docid": "bdda2d3eef1a5040d626419c10f18d36", "text": "This paper presents a novel hybrid permanent magnet and wound field synchronous machine geometry with a displaced reluctance axis. 
This concept is known for improving motor operation performance and efficiency at the cost of an inferior generator operation. To overcome this disadvantage, the proposed machine geometry is capable of inverting the magnetic asymmetry dynamically. Thereby, the positive effects of the magnetic asymmetry can be used in any operation point. This paper examines the theoretical background and shows the benefits of this geometry by means of simulation and measurement. The prototype achieves an increase in torque of 4 % and an increase in efficiency of 2 percentage points over a conventional electrically excited synchronous machine.", "title": "" }, { "docid": "78f03adf9c114a8a720c9518b1cbf59e", "text": "A crucial capability of autonomous road vehicles is the ability to cope with the unknown future behavior of surrounding traffic participants. This requires using non-deterministic models for prediction. While stochastic models are useful for long-term planning, we use set-valued non-determinism capturing all possible behaviors in order to verify the safety of planned maneuvers. To reduce the set of solutions, our earlier work considers traffic rules; however, it neglects mutual influences between traffic participants. This work presents the first solution for establishing interaction within set-based prediction of traffic participants. Instead of explicitly modeling dependencies between vehicles, we trim reachable occupancy regions to consider interaction, which is computationally much more efficient. The usefulness of our approach is demonstrated by experiments from the CommonRoad benchmark repository.", "title": "" }, { "docid": "a6139086a926464cab971e811b564956", "text": "A text corpus typically contains two types of context information -- global context and local context. Global context carries topical information which can be utilized by topic models to discover topic structures from the text corpus, while local context can train word embeddings to capture semantic regularities reflected in the text corpus. This encourages us to exploit the useful information in both the global and the local context information. In this paper, we propose a unified language model based on matrix factorization techniques which 1) takes the complementary global and local context information into consideration simultaneously, and 2) models topics and learns word embeddings collaboratively. We empirically show that by incorporating both global and local context, this collaborative model can not only significantly improve the performance of topic discovery over the baseline topic models, but also learn better word embeddings than the baseline word embedding models. We also provide qualitative analysis that explains how the cooperation of global and local context information can result in better topic structures and word embeddings.", "title": "" }, { "docid": "22ecb164fb7a8bf4968dd7f5e018c736", "text": "Unsupervised learning techniques in computer vision of ten require learning latent representations, such as low-dimensional linear and non-linear subspaces. Noise and outliers in the data can frustrate these approaches by obscuring the latent spaces. Our main goal is deeper understanding and new development of robust approaches for representation learning. 
We provide a new interpretation for existing robust approaches and present two specific contributions: a new robust PCA approach, which can separate foreground features from dynamic background, and a novel robust spectral clustering method, that can cluster facial images with high accuracy. Both contributions show superior performance to standard methods on real-world test sets.", "title": "" } ]
scidocsrr
66d83be656b37c668d9d6753c6ac8bff
Cloud-based Wireless Network: Virtualized, Reconfigurable, Smart Wireless Network to Enable 5G Technologies
[ { "docid": "0cbd3587fe466a13847e94e29bb11524", "text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?", "title": "" }, { "docid": "4412bca4e9165545e4179d261828c85c", "text": "Today 3G mobile systems are on the ground providing IP connectivity for real-time and non-real-time services. On the other side, there are many wireless technologies that have proven to be important, with the most important ones being 802.11 Wireless Local Area Networks (WLAN) and 802.16 Wireless Metropolitan Area Networks (WMAN), as well as ad-hoc Wireless Personal Area Network (WPAN) and wireless networks for digital TV and radio broadcast. Then, the concepts of 4G is already much discussed and it is almost certain that 4G will include several standards under a common umbrella, similarly to 3G, but with IEEE 802.xx wireless mobile networks included from the beginning. The main contribution of this paper is definition of 5G (Fifth Generation) mobile network concept, which is seen as user-centric concept instead of operator-centric as in 3G or service-centric concept as seen for 4G. In the proposed concept the mobile user is on the top of all. The 5G terminals will have software defined radios and modulation scheme as well as new error-control schemes can be downloaded from the Internet on the run. The development is seen towards the user terminals as a focus of the 5G mobile networks. The terminals will have access to different wireless technologies at the same time and the terminal should be able to combine different flows from different technologies. Each network will be responsible for handling user-mobility, while the terminal will make the final choice among different wireless/mobile access network providers for a given service. The paper also proposes intelligent Internet phone concept where the mobile phone can choose the best connections by selected constraints and dynamically change them during a single end-to-end connection. The proposal in this paper is fundamental shift in the mobile networking philosophy compared to existing 3G and near-soon 4G mobile technologies, and this concept is called here the 5G.", "title": "" }, { "docid": "68a0298286210e50240557222468c4d3", "text": "As the take-up of Long Term Evolution (LTE)/4G cellular accelerates, there is increasing interest in technologies that will define the next generation (5G) telecommunication standard. This article identifies several emerging technologies which will change and define the future generations of telecommunication standards. Some of these technologies are already making their way into standards such as 3GPP LTE, while others are still in development. Additionally, we will look at some of the research problems that these new technologies pose.", "title": "" } ]
[ { "docid": "cf609c174c70295ef57995f662ceda50", "text": "Upper limb exercise is often neglected during post-stroke rehabilitation. Video games have been shown to be useful in providing environments in which patients can practise repetitive, functionally meaningful movements, and in inducing neuroplasticity. The design of video games is often focused upon a number of fundamental principles, such as reward, goals, challenge and the concept of meaningful play, and these same principles are important in the design of games for rehabilitation. Further to this, there have been several attempts for the strengthening of the relationship between commercial game design and rehabilitative game design, the former providing insight into factors that can increase motivation and engagement with the latter. In this article, we present an overview of various game design principles and the theoretical grounding behind their presence, in addition to attempts made to utilise these principles in the creation of upper limb stroke rehabilitation systems and the outcomes of their use. We also present research aiming to move the collaborative efforts of designers and therapists towards a model for the structured design of these games and the various steps taken concerning the theoretical classification and mapping of game design concepts with intended cognitive and motor outcomes.", "title": "" }, { "docid": "446af0ad077943a77ac4a38fd84df900", "text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3 value ofVT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3 value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10 15%. We estimate a tolerance of 1 2 A 3 value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.", "title": "" }, { "docid": "6e75a3e63c515f97b6ab9c68d1f77d2c", "text": "This paper explores the use of multiple models in performing question answering tasks on the Stanford Question Answering Database. We first implement and share results of a baseline model using bidirectional long short-term memory (BiLSTM) encoding of question and context followed a simple co-attention model [1]. 
We then report on the use of match-LSTM and Pointer Net which showed marked improvements in question answering over the baseline model [2]. Lastly, we extend the model by adding Dropout [3] and randomization strategies to account for unknown tokens. Final test score on Codalab under Username: yife, F1: 66.981, EM: 56.845.", "title": "" }, { "docid": "f6529f9327f72d77d36e2002d97cfdf6", "text": "The history of machine translation is described from its beginnings in the 1940s to the present day. In the earliest years, efforts were concentrated either on developing immediately useful systems, however crude in their translation quality, or on fundamental research for high quality translation systems. After the ALPAC report in 1966, which virtually ended MT research in the US for more than a decade, research focussed on the development of systems requiring human assistance for producing translations of technical documentation, on translation tools for direct use by translators themselves, and, in recent years, on systems for translating email, Web pages and other Internet documentation, where poor quality is acceptable in the interest of rapid results.", "title": "" }, { "docid": "db4499bc08d0ed24f81e9412d8869d37", "text": "In this paper we assess our progress toward creating a virtual human negotiation agent with fluid turn-taking skills. To facilitate the design of this agent, we have collected a corpus of human-human negotiation roleplays as well as a corpus of Wizard-controlled human-agent negotiations in the same roleplay scenario. We compare the natural turn-taking behavior in our human-human corpus with that achieved in our Wizard-of-Oz corpus, and quantify our virtual human’s turn-taking skills using a combination of subjective and objective metrics. We also discuss our design for a Wizard user interface to support real-time control of the virtual human’s turntaking and dialogue behavior, and analyze our wizard’s usage of this interface.", "title": "" }, { "docid": "6d809270c7fbcf5b4b3c1a3c71026c3f", "text": "Requirements defects have a major impact throughout the whole software lifecycle. Having a specific defects classification for requirements is important to analyse the root causes of problems, build checklists that support requirements reviews and to reduce risks associated with requirements problems. In our research we analyse several defects classifiers; select the ones applicable to requirements specifications, following rules to build defects taxonomies; and assess the classification validity in an experiment of requirements defects classification performed by graduate and undergraduate students. Not all subjects used the same type of defect to classify the same defect, which suggests that defects classification is not consensual. Considering our results we give recommendations to industry and other researchers on the design of classification schemes and treatment of classification results.", "title": "" }, { "docid": "c3b2ef6d7010d7c08c314ddfdc2780c4", "text": "research and development dollars for decades now, and people are beginning to ask hard questions: What really works? What are the limits? What doesn’t work as advertised? What isn’t likely to work? What isn’t affordable? This article holds a mirror up to the community, both to provide feedback and stimulate more selfassessment. The significant accomplishments and strengths of the field are highlighted. 
The research agenda, strategy, and heuristics are reviewed, and a change of course is recommended to improve the field’s ability to produce reusable and interoperable components.", "title": "" }, { "docid": "8f967b0a46e3dad8f39476b2efea48b7", "text": "Today’s rapid changing world highlights the influence and impact of technology in all aspects of learning life. Higher Education institutions in developed Western countries believe that these developments offer rich opportunities to embed technological innovations within the learning environment. This places developing countries, striving to be equally competitive in international markets, under tremendous pressure to similarly embed appropriate blends of technologies within their learning and curriculum approaches, and consequently enhance and innovate their learning experiences. Although many universities across the world have incorporated internet-based learning systems, the success of their implementation requires an extensive understanding of the end user acceptance process. Learning using technology has become a popular approach within higher education institutions due to the continuous growth of Internet innovations and technologies. Therefore, this paper focuses on the investigation of students, who attempt to successfully adopt e-learning systems at universities in Jordan. The conceptual research framework of e-learning adoption, which is used in the analysis, is based on the technology acceptance model. The study also provides an indicator of students’ acceptance of e-learning as well as identifying the important factors that would contribute to its successful use. The outcomes will enrich the understanding of students’ acceptance of e-learning and will assist in its continuing implementation at Jordanian Universities.", "title": "" }, { "docid": "20af5209de71897158820f935018d877", "text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.", "title": "" }, { "docid": "4d0889329f9011adc05484382e4f5dc0", "text": "A high level of sustained personal plaque control is fundamental for successful treatment outcomes in patients with active periodontal disease and, hence, oral hygiene instructions are the cornerstone of periodontal treatment planning. Other risk factors for periodontal disease also should be identified and modified where possible. Many restorative dental treatments in particular require the establishment of healthy periodontal tissues for their clinical success. Failure by patients to control dental plaque because of inappropriate designs and materials for restorations and prostheses will result in the long-term failure of the restorations and the loss of supporting tissues. 
Periodontal treatment planning considerations are also very relevant to endodontic, orthodontic and osseointegrated dental implant conditions and proposed therapies.", "title": "" }, { "docid": "23527243a9ccb9feaa24ccc7ac38f05d", "text": "BACKGROUND\nElectrosurgical units are the most common type of electrical equipment in the operating room. A basic understanding of electricity is needed to safely apply electrosurgical technology for patient care.\n\n\nMETHODS\nWe reviewed the literature concerning the essential biophysics, the incidence of electrosurgical injuries, and the possible mechanisms for injury. Various safety guidelines pertaining to avoidance of injuries were also reviewed.\n\n\nRESULTS\nElectrothermal injury may result from direct application, insulation failure, direct coupling, capacitive coupling, and so forth.\n\n\nCONCLUSION\nA thorough knowledge of the fundamentals of electrosurgery by the entire team in the operating room is essential for patient safety and for recognizing potential complications. Newer hemostatic technologies can be used to decrease the incidence of complications.", "title": "" }, { "docid": "fba3c3a0fbc08c992d388e6854890b01", "text": "This paper presents a revenue maximisation model for sales channel allocation based on dynamic programming. It helps the media seller to determine how to distribute the sales volume of page views between guaranteed and nonguaranteed channels for display advertising. The model can algorithmically allocate and price the future page views via standardised guaranteed contracts in addition to real-time bidding (RTB). This is one of a few studies that investigates programmatic guarantee (PG) with posted prices. Several assumptions are made for media buyers’ behaviour, such as risk-aversion, stochastic demand arrivals, and time and price effects. We examine our model with an RTB dataset and find it increases the seller’s expected total revenue by adopting different pricing and allocation strategies depending the level of competition in RTB campaigns. The insights from this research can increase the allocative efficiency of the current media sellers’ sales mechanism and thus improve their revenue.", "title": "" }, { "docid": "a084e7dd5485e01d97ccf628bc00d644", "text": "A novel concept called gesture-changeable under-actuated (GCUA) function is proposed to improve the dexterities of traditional under-actuated hands and reduce the control difficulties of dexterous hands. Based on the GCUA function, a new humanoid robot hand, GCUA Hand is designed and manufactured. The GCUA Hand can grasp different objects self-adaptively and change its initial gesture dexterously before contacting objects. The hand has 5 fingers and 15 DOFs, each finger is based on screw-nut transmission, flexible drawstring constraint and belt-pulley under-actuated mechanism to realize GCUA function. The analyses on grasping static forces and grasping stabilities are put forward. The analyses and Experimental results show that the GCUA function is very nice and valid. The hands with the GCUA function can meet the requirements of grasping and operating with lower control and cost, which is the middle road between traditional under-actuated hands and dexterous hands.", "title": "" }, { "docid": "33c113db245fb36c3ce8304be9909be6", "text": "Bring Your Own Device (BYOD) is growing in popularity. In fact, this inevitable and unstoppable trend poses new security risks and challenges to control and manage corporate networks and data. 
BYOD may be infected by viruses, spyware or malware that gain access to sensitive data. This unwanted access led to the disclosure of information, modify access policy, disruption of service, loss of productivity, financial issues, and legal implications. This paper provides a review of existing literature concerning the access control and management issues, with a focus on recent trends in the use of BYOD. This article provides an overview of existing research articles which involve access control and management issues, which constitute of the recent rise of usage of BYOD devices. This review explores a broad area concerning information security research, ranging from management to technical solution of access control in BYOD. The main aim for this is to investigate the most recent trends touching on the access control issues in BYOD concerning information security and also to analyze the essential and comprehensive requirements needed to develop an access control framework in the future. Keywords— Bring Your Own Device, BYOD, access control, policy, security.", "title": "" }, { "docid": "d45b084040e5f07d39f622fc3543e10b", "text": "Low-shot learning methods for image classification support learning from sparse data. We extend these techniques to support dense semantic image segmentation. Specifically, we train a network that, given a small set of annotated images, produces parameters for a Fully Convolutional Network (FCN). We use this FCN to perform dense pixel-level prediction on a test image for the new semantic class. Our architecture shows a 25% relative meanIoU improvement compared to the best baseline methods for one-shot segmentation on unseen classes in the PASCAL VOC 2012 dataset and is at least 3× faster. The code is publicly available at: https://github.com/lzzcd001/OSLSM.", "title": "" }, { "docid": "db866d876dddb61c4da3ff554e5b6643", "text": "Distributed stream processing systems need to support stateful processing, recover quickly from failures to resume such processing, and reprocess an entire data stream quickly. We present Apache Samza, a distributed system for stateful and fault-tolerant stream processing. Samza utilizes a partitioned local state along with a low-overhead background changelog mechanism, allowing it to scale to massive state sizes (hundreds of TB) per application. Recovery from failures is sped up by re-scheduling based on Host Affinity. In addition to processing infinite streams of events, Samza supports processing a finite dataset as a stream, from either a streaming source (e.g., Kafka), a database snapshot (e.g., Databus), or a file system (e.g. HDFS), without having to change the application code (unlike the popular Lambdabased architectures which necessitate maintenance of separate code bases for batch and stream path processing). Samza is currently in use at LinkedIn by hundreds of production applications with more than 10, 000 containers. Samza is an open-source Apache project adopted by many top-tier companies (e.g., LinkedIn, Uber, Netflix, TripAdvisor, etc.). Our experiments show that Samza: a) handles state efficiently, improving latency and throughput by more than 100× compared to using a remote storage; b) provides recovery time independent of state size; c) scales performance linearly with number of containers; and d) supports reprocessing of the data stream quickly and with minimal interference on real-time traffic.", "title": "" }, { "docid": "c4367db5e4f46a7c58e11c2fbb629f90", "text": "Microblogging data is growing at a rapid pace. 
This poses new challenges to the data management systems, such as graph databases, that are typically suitable for analyzing such data. In this paper, we share our experience on executing a wide variety of micro-blogging queries on two popular graph databases: Neo4j and Sparksee. Our queries are designed to be relevant to popular applications of micro-blogging data. The queries are executed on a large real graph data set comprising of nearly 50 million nodes and 326 million edges.", "title": "" }, { "docid": "d5b51e2d90b52fed0712db7dad6602c9", "text": "Due to the rapid increase in world population, the waste of food and resources, and non-sustainable food production practices, the use of alternative food sources is currently strongly promoted. In this perspective, insects may represent a valuable alternative to main animal food sources due to their nutritional value and sustainable production. However, edible insects may be perceived as an unappealing food source and are indeed rarely consumed in developed countries. The food safety of edible insects can thus contribute to the process of acceptance of insects as an alternative food source, changing the perception of developed countries regarding entomophagy. In the present study, the levels of organic contaminants (i.e. flame retardants, PCBs, DDT, dioxin compounds, pesticides) and metals (As, Cd, Co, Cr, Cu, Ni, Pb, Sn, Zn) were investigated in composite samples of several species of edible insects (greater wax moth, migratory locust, mealworm beetle, buffalo worm) and four insect-based food items currently commercialized in Belgium. The organic chemical mass fractions were relatively low (PCBs: 27-2065 pg/g ww; OCPs: 46-368 pg/g ww; BFRs: up to 36 pg/g ww; PFRs 783-23800 pg/g ww; dioxin compounds: up to 0.25 pg WHO-TEQ/g ww) and were generally lower than those measured in common animal products. The untargeted screening analysis revealed the presence of vinyltoluene, tributylphosphate (present in 75% of the samples), and pirimiphos-methyl (identified in 50% of the samples). The levels of Cu and Zn in insects were similar to those measured in meat and fish in other studies, whereas As, Co, Cr, Pb, Sn levels were relatively low in all samples (<0.03 mg/kg ww). Our results support the possibility to consume these insect species with no additional hazards in comparison to the more commonly consumed animal products.", "title": "" }, { "docid": "b2db53f203f2b168ec99bd8e544ff533", "text": "BACKGROUND\nThis study aimed to analyze the scientific outputs of esophageal and esophagogastric junction (EGJ) cancer and construct a model to quantitatively and qualitatively evaluate pertinent publications from the past decade.\n\n\nMETHODS\nPublications from 2007 to 2016 were retrieved from the Web of Science Core Collection database. Microsoft Excel 2016 (Redmond, WA) and the CiteSpace (Drexel University, Philadelphia, PA) software were used to analyze publication outcomes, journals, countries, institutions, authors, research areas, and research frontiers.\n\n\nRESULTS\nA total of 12,978 publications on esophageal and EGJ cancer were identified published until March 23, 2017. The Journal of Clinical Oncology had the largest number of publications, the USA was the leading country, and the University of Texas MD Anderson Cancer Center was the leading institution. Ajani JA published the most papers, and Jemal A had the highest co-citation counts. 
Esophageal squamous cell carcinoma ranked the first in research hotspots, and preoperative chemotherapy/chemoradiotherapy ranked the first in research frontiers.\n\n\nCONCLUSION\nThe annual number of publications steadily increased in the past decade. A considerable number of papers were published in journals with high impact factor. Many Chinese institutions engaged in esophageal and EGJ cancer research but significant collaborations among them were not noted. Jemal A, Van Hagen P, Cunningham D, and Enzinger PC were identified as good candidates for research collaboration. Neoadjuvant therapy and genome-wide association study in esophageal and EGJ cancer research should be closely observed.", "title": "" }, { "docid": "e88f19cdd7f21c5aafedc13143bae00f", "text": "For a long time, the term virtualization implied talking about hypervisor-based virtualization. However, in the past few years container-based virtualization got mature and especially Docker gained a lot of attention. Hypervisor-based virtualization provides strong isolation of a complete operating system whereas container-based virtualization strives to isolate processes from other processes at little resource costs. In this paper, hypervisor and container-based virtualization are differentiated and the mechanisms behind Docker and LXC are described. The way from a simple chroot over a container framework to a ready to use container management solution is shown and a look on the security of containers in general is taken. This paper gives an overview of the two different virtualization approaches and their advantages and disadvantages.", "title": "" } ]
scidocsrr
19cb7e29919bb9336b151b313d42c4ef
Approximate fair bandwidth allocation: A method for simple and flexible traffic management
[ { "docid": "740daa67e29636ac58d6f3fa48bd51ba", "text": "Status of Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract This memo presents two recommendations to the Internet community concerning measures to improve and preserve Internet performance. It presents a strong recommendation for testing, standardization, and widespread deployment of active queue management in routers, to improve the performance of today's Internet. It also urges a concerted effort of research, measurement, and ultimate deployment of router mechanisms to protect the Internet from flows that are not sufficiently responsive to congestion notification.", "title": "" } ]
[ { "docid": "c629d3588203af2e328fb116c836bb8c", "text": "The purpose of this study was to clinically and radiologically compare the utility, osteoconductivity, and absorbability of hydroxyapatite (HAp) and beta-tricalcium phosphate (TCP) spacers in medial open-wedge high tibial osteotomy (HTO). Thirty-eight patients underwent medial open-wedge HTO with a locking plate. In the first 19 knees, a HAp spacer was implanted in the opening space (HAp group). In the remaining 19 knees, a TCP spacer was implanted in the same manner (TCP group). All patients underwent clinical and radiological examinations before surgery and at 18 months after surgery. Concerning the background factors, there were no statistical differences between the two groups. Post-operatively, the knee score significantly improved in each group. Concerning the post-operative knee alignment and clinical outcome, there was no statistical difference in each parameter between the two groups. Regarding the osteoconductivity, the modified van Hemert’s score of the TCP group was significantly higher (p = 0.0009) than that of the HAp group in the most medial osteotomy zone. The absorption rate was significantly greater in the TCP group than in the HAp group (p = 0.00039). The present study demonstrated that a TCP spacer was significantly superior to a HAp spacer concerning osteoconductivity and absorbability at 18 months after medial open-wedge HTO. Retrospective comparative study, Level III.", "title": "" }, { "docid": "85221954ced857c449acab8ee5cf801e", "text": "IMSI Catchers are used in mobile networks to identify and eavesdrop on phones. When, the number of vendors increased and prices dropped, the device became available to much larger audiences. Self-made devices based on open source software are available for about US$ 1,500.\n In this paper, we identify and describe multiple methods of detecting artifacts in the mobile network produced by such devices. We present two independent novel implementations of an IMSI Catcher Catcher (ICC) to detect this threat against everyone's privacy. The first one employs a network of stationary (sICC) measurement units installed in a geographical area and constantly scanning all frequency bands for cell announcements and fingerprinting the cell network parameters. These rooftop-mounted devices can cover large areas. The second implementation is an app for standard consumer grade mobile phones (mICC), without the need to root or jailbreak them. Its core principle is based upon geographical network topology correlation, facilitating the ubiquitous built-in GPS receiver in today's phones and a network cell capabilities fingerprinting technique. The latter works for the vicinity of the phone by first learning the cell landscape and than matching it against the learned data. We implemented and evaluated both solutions for digital self-defense and deployed several of the stationary units for a long term field-test. Finally, we describe how to detect recently published denial of service attacks.", "title": "" }, { "docid": "a34e153e5027a1483fd25c3ff3e1ea0c", "text": "In this paper, we study how to initialize the convolutional neural network (CNN) model for training on a small dataset. Specially, we try to extract discriminative filters from the pre-trained model for a target task. On the basis of relative entropy and linear reconstruction, two methods, Minimum Entropy Loss (MEL) and Minimum Reconstruction Error (MRE), are proposed. 
The CNN models initialized by the proposed MEL and MRE methods are able to converge fast and achieve better accuracy. We evaluate MEL and MRE on the CIFAR10, CIFAR100, SVHN, and STL-10 public datasets. The consistent performances demonstrate the advantages of the proposed methods.", "title": "" }, { "docid": "a645943a02f5d71b146afe705fb6f49f", "text": "Along with the developments in the field of information technologies, the data in the electronic environment is increasing. Data mining methods are needed to obtain useful information for users in electronic environment. One of these methods, clustering methods, aims to group data according to common properties. This grouping is often based on the distance between the data. Clustering methods are divided into hierarchical and non-hierarchical methods according to the fragmentation technique of clusters. The success of both types of clustering methods varies according to the data set applied. In this study, both types of methods were tested on different type of data sets. Selected methods compared according to five different evaluation metrics. The results of the analysis are presented comparatively at the end of the study and which methods are more convenient for data set is explained.", "title": "" }, { "docid": "5168f7f952d937460d250c44b43f43c0", "text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency 900 MHz, it comes in handy for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz where the axial ratio of proposed antenna model is less than 3 dB. The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and impedance bandwidth of 256 MHz (28.5%).", "title": "" }, { "docid": "17dce24f26d7cc196e56a889255f92a8", "text": "As known, to finish this book, you may not need to get it at once in a day. Doing the activities along the day may make you feel so bored. If you try to force reading, you may prefer to do other entertaining activities. But, one of concepts we want you to have this book is that it will not make you feel bored. Feeling bored when reading will be only unless you don't like the book. computational principles of mobile robotics really offers what everybody wants.", "title": "" }, { "docid": "ae7117416b4a07d2b15668c2c8ac46e3", "text": "We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling to rate and measure the popularity of content and honoring the activity of users. OntoWiki enhances the browsing and retrieval by offering semantic enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of the Web 2.0 OntoWiki implements an ”architecture of participation” that allows users to add value to the application as they use it. 
It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.", "title": "" }, { "docid": "0289858bb9002e00d753e1ed2da8b204", "text": "This paper presents a motion planning method for mobile manipulators for which the base locomotion is less precise than the manipulator control. In such a case, it is advisable to move the base to discrete poses from which the manipulator can be deployed to cover a prescribed trajectory. The proposed method finds base poses that not only cover the trajectory but also meet constraints on a measure of manipulability. We propose a variant of the conventional manipulability measure that is suited to the trajectory control of the end effector of the mobile manipulator along an arbitrary curve in three space. Results with implementation on a mobile manipulator are discussed.", "title": "" }, { "docid": "19621b0ab08cb0abed04b859331d8092", "text": "The objective of designing a strategy for an institution is to create more value and achieve its vision, with clear and coherent strategies, identifying the conditions in which they are currently, the sector in which they work and the different types of competences that generate, as well as the market in general where they perform, to create this type of conditions requires the availability of strategic information to verify the current conditions, to define the strategic line to follow according to internal and external factors, and in this way decide which methods to use to implement the development of a strategy in the organization. This research project was developed in an institution of higher education where the strategic processes were analyzed from different perspectives i.e. financial, customers, internal processes, and training and learning using business intelligence tools, such as Excel Power BI, Power Pivot, Power Query and a relational database for data repository; which helped having agile and effective information for the creation of the balanced scorecard, involving all levels of the organization and academic units; operating key performance indicators (KPI’s), for operational and strategic decisions. The results were obtained in form of boards of indicators designed to be visualized in the final view of the software constructed with previously described software tools. Keywords—Business intelligence; balanced scorecard; key performance indicators; BI Tools", "title": "" }, { "docid": "0e5111addf4a6d5f0cad92707d6b7173", "text": "We present a novel model based stereo system, which accurately extracts the 3D shape and pose of faces from multiple images taken simultaneously. Extracting the 3D shape from images is important in areas such as pose-invariant face recognition and image manipulation. The method is based on a 3D morphable face model learned from a database of facial scans. The use of a strong face prior allows us to extract high precision surfaces from stereo data of faces, where traditional correlation based stereo methods fail because of the mostly textureless input images. The method uses two or more uncalibrated images of arbitrary baseline, estimating calibration and shape simultaneously. Results using two and three input images are presented. We replace the lighting and albedo estimation of a monocular method with the use of stereo information, making the system more accurate and robust. We evaluate the method using ground truth data and the standard PIE image dataset. 
A comparison with the state of the art monocular system shows that the new method has a significantly higher accuracy.", "title": "" }, { "docid": "85012f6ad9aa8f3e80a9c971076b4eb9", "text": "The article aims to introduce an integrated web-based interactive data platform for molecular dynamic simulations using the datasets generated by different life science communities from Armenia. The suggested platform, consisting of data repository and workflow management services, is vital for current and future scientific discoveries in the life science domain. We focus on interactive data visualization workflow service as a key to perform more in-depth analyzes of research data outputs, helping to understand the problems efficiently and to consolidate the data into one collective illustration platform. The functionalities of the integrated data platform is presented as an advanced integrated environment to capture, analyze, process and visualize the scientific data.", "title": "" }, { "docid": "8d208bb5318dcbc5d941df24906e121f", "text": "Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses.", "title": "" }, { "docid": "ae9469b80390e5e2e8062222423fc2cd", "text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. 
Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.", "title": "" }, { "docid": "ec26505d813ed98ac3f840ea54358873", "text": "In this paper we address cardinality estimation problem which is an important subproblem in query optimization. Query optimization is a part of every relational DBMS responsible for finding the best way of the execution for the given query. These ways are called plans. The execution time of different plans may differ by several orders, so query optimizer has a great influence on the whole DBMS performance. We consider cost-based query optimization approach as the most popular one. It was observed that costbased optimization quality depends much on cardinality estimation quality. Cardinality of the plan node is the number of tuples returned by it. In the paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of the previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times.", "title": "" }, { "docid": "73e616ebf26c6af34edb0d60a0ce1773", "text": "While recent deep neural networks have achieved a promising performance on object recognition, they rely implicitly on the visual contents of the whole image. In this paper, we train deep neural networks on the foreground (object) and background (context) regions of images respectively. Considering human recognition in the same situations, networks trained on the pure background without objects achieves highly reasonable recognition performance that beats humans by a large margin if only given context. However, humans still outperform networks with pure object available, which indicates networks and human beings have different mechanisms in understanding an image. Furthermore, we straightforwardly combine multiple trained networks to explore different visual cues learned by different networks. Experiments show that useful visual hints can be explicitly learned separately and then combined to achieve higher performance, which verifies the advantages of the proposed framework.", "title": "" }, { "docid": "0952701dd63326f8a78eb5bc9a62223f", "text": "The self-organizing map (SOM) is an automatic data-analysis method. It is widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics. The most extensive applications, exemplified in this paper, can be found in the management of massive textual databases and in bioinformatics. The SOM is related to the classical vector quantization (VQ), which is used extensively in digital signal processing and transmission. 
Like in VQ, the SOM represents a distribution of input data items using a finite set of models. In the SOM, however, these models are automatically associated with the nodes of a regular (usually two-dimensional) grid in an orderly fashion such that more similar models become automatically associated with nodes that are adjacent in the grid, whereas less similar models are situated farther away from each other in the grid. This organization, a kind of similarity diagram of the models, makes it possible to obtain an insight into the topographic relationships of data, especially of high-dimensional data items. If the data items belong to certain predetermined classes, the models (and the nodes) can be calibrated according to these classes. An unknown input item is then classified according to that node, the model of which is most similar with it in some metric used in the construction of the SOM. A new finding introduced in this paper is that an input item can even more accurately be represented by a linear mixture of a few best-matching models. This becomes possible by a least-squares fitting procedure where the coefficients in the linear mixture of models are constrained to nonnegative values.", "title": "" }, { "docid": "83fda0277ebcdb6aeae216a38553db9c", "text": "Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations; this makes it difficult for non-experts to use. We propose an automatic variational inference algorithm, automatic differentiation variational inference (ADVI); we implement it in Stan (code available), a probabilistic programming system. In ADVI the user provides a Bayesian model and a dataset, nothing else. We make no conjugacy assumptions and support a broad class of models. The algorithm automatically determines an appropriate variational family and optimizes the variational objective. We compare ADVI to MCMC sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model. We train the mixture model on a quarter million images. With ADVI we can use variational inference on any model we write in Stan.", "title": "" }, { "docid": "3550dbe913466a675b621d476baba219", "text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. 
Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.", "title": "" }, { "docid": "e67d09b3bf155c5191ad241006e011ad", "text": "An effective approach for energy conservation in wireless sensor networks is scheduling sleep intervals for extraneous nodes, while the remaining nodes stay active to provide continuous service. For the sensor network to operate successfully, the active nodes must maintain both sensing coverage and network connectivity. Furthermore, the network must be able to configure itself to any feasible degrees of coverage and connectivity in order to support different applications and environments with diverse requirements. This paper presents the design and analysis of novel protocols that can dynamically configure a network to achieve guaranteed degrees of coverage and connectivity. This work differs from existing connectivity or coverage maintenance protocols in several key ways: 1) We present a Coverage Configuration Protocol (CCP) that can provide different degrees of coverage requested by applications. This flexibility allows the network to self-configure for a wide range of applications and (possibly dynamic) environments. 2) We provide a geometric analysis of the relationship between coverage and connectivity. This analysis yields key insights for treating coverage and connectivity in a unified framework: this is in sharp contrast to several existing approaches that address the two problems in isolation. 3) Finally, we integrate CCP with SPAN to provide both coverage and connectivity guarantees. We demonstrate the capability of our protocols to provide guaranteed coverage and connectivity configurations, through both geometric analysis and extensive simulations.", "title": "" } ]
scidocsrr
1fed19e9ce9c5752f552fd164ee8ec78
Contextualized Bilinear Attention Networks
[ { "docid": "86c998f5ffcddb0b74360ff27b8fead4", "text": "Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.", "title": "" } ]
[ { "docid": "527c4c17aadb23a991d85511004a7c4f", "text": "Accurate and robust recognition and prediction of traffic situation plays an important role in autonomous driving, which is a prerequisite for risk assessment and effective decision making. Although there exist a lot of works dealing with modeling driver behavior of a single object, it remains a challenge to make predictions for multiple highly interactive agents that react to each other simultaneously. In this work, we propose a generic probabilistic hierarchical recognition and prediction framework which employs a two-layer Hidden Markov Model (TLHMM) to obtain the distribution of potential situations and a learning-based dynamic scene evolution model to sample a group of future trajectories. Instead of predicting motions of a single entity, we propose to get the joint distribution by modeling multiple interactive agents as a whole system. Moreover, due to the decoupling property of the layered structure, our model is suitable for knowledge transfer from simulation to real world applications as well as among different traffic scenarios, which can reduce the computational efforts of training and the demand for a large data amount. A case study of highway ramp merging scenario is demonstrated to verify the effectiveness and accuracy of the proposed framework.", "title": "" }, { "docid": "c6054c39b9b36b5d446ff8da3716ec30", "text": "The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage the large amounts of data stream, especially in specific domains of application such as critical infrastructure systems, sensor networks, log file analysis, search engines and more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time constraints and space complexity. Algorithms, data management and data retrieval techniques must be able to process data stream, i.e., process data as it becomes available and provide an accurate response, based solely on the data stream that has already been provided. Data retrieval techniques often require traditional data storage and processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency–Inverse Document Frequency (TF–IDF), which can evaluate how important a word is in a collection of documents and requires to a priori know the whole dataset. To address this problem, we propose an approximate version of the TF–IDF measure suitable to work on continuous data stream (such as the exchange of messages, tweets and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. In addition, to face the great computational power required to process massive data stream, we present also a parallel implementation of the approximate TF–IDF calculation using Graphical Processing Units (GPUs). This implementation of the algorithm was tested on generated and real data stream and was able to capture the most frequent terms. Our results demonstrate that the approximate version of the TF–IDF measure performs at a level that is comparable to the solution of the precise TF–IDF measure. 2014 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "273bd38144d33aa215298ddd5cf674f2", "text": "Looking to increase the functionality of current wireless platforms and to improve their quality of service, we have explored the merits of using frequency-reconfigurable antennas as an alternative for multiband antennas. Our study included an analysis of various reconfigurable and multiband structures such as patches, wires, and combinations. Switches, such as radio-frequency microelectromechanical systems (RFMEMS) and p-i-n diodes, were also studied and directly incorporated onto antenna structures to successfully form frequency-reconfigurable antennas.", "title": "" }, { "docid": "a1bef11b10bc94f84914d103311a5941", "text": "Class imbalance and class overlap are two of the major problems in data mining and machine learning. Several studies have shown that these data complexities may affect the performance or behavior of artificial neural networks. Strategies proposed to face with both challenges have been separately applied. In this paper, we introduce a hybrid method for handling both class imbalance and class overlap simultaneously in multi-class learning problems. Experimental results on five remote sensing data show that the combined approach is a promising method. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5a13fa656b34d25fb53c707291721d04", "text": "Cloud computing is a popular model for accessing the computer resources. The data owner outsources their data on cloud server that can be accessed by an authorized user. In Cloud Computing public key encryption with equality test (PKEET) provides an alternative to public key encryption by simplify the public key and credential administration at Public Key Infrastructure (PKI). However it still faces the security risk in outsourced computation on encrypted data. Therefore this paper proposed a novel identity based hybrid encryption (RSA with ECC) to enhance the security of outsourced data. In this approach sender encrypts the sensitive data using hybrid algorithm. Then the proxy re encryption is used to encrypt the keyword and identity in standardize toward enrichment security of data.", "title": "" }, { "docid": "94c5f0bba64e131a64989813652846a5", "text": "The ability to access patents and relevant patent-related information pertaining to a patented technology can fundamentally transform the patent system and its functioning and patent institutions such as the USPTO and the federal courts. This paper describes an ontology-based computational framework that can resolve some of difficult issues in retrieving patents and patent related information for the legal and justice system.", "title": "" }, { "docid": "6b718717d5ecef343a8f8033803a55e6", "text": "BACKGROUND\nMedication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data.\n\n\nOBJECTIVE\nTo unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. 
This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations.\n\n\nMETHODS\nWe have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types.\n\n\nRESULTS\nOur results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%.\n\n\nCONCLUSIONS\nIt shows that classical learning models (SVM) remains advantageous over deep learning models (RNN variants) for clinical relation identification, especially for long-distance intersentential relations. However, RNNs demonstrate a great potential of significant improvement if more training data become available. Our work is an important step toward mining EHRs to improve the efficacy of drug safety surveillance. Most importantly, the annotated data used in this study will be made publicly available, which will further promote drug safety research in the community.", "title": "" }, { "docid": "ec920015a3206a5d76e8ab3698ceab90", "text": "In this paper, we present a method for temporal relation extraction from clinical narratives in French and in English. We experiment on two comparable corpora, the MERLOT corpus for French and the THYME corpus for English, and show that a common approach can be used for both languages.", "title": "" }, { "docid": "da47eb6c793f4afff5aecf6f52194e12", "text": "An inline chalcogenide phase change RF switch utilizing germanium telluride (GeTe) and driven by an integrated, electrically isolated thin film heater for thermal actuation has been fabricated. A voltage or current pulse applied to the heater terminals was used to transition the phase change material between the crystalline and amorphous states. An on-state resistance of 1.2 Ω (0.036 Ω-mm), with an off-state capacitance and resistance of 18.1 fF and 112 kΩ respectively were measured. This results in an RF switch cut-off frequency (Fco) of 7.3 THz, and an off/on DC resistance ratio of 9 × 104. The heater pulse power required to switch the GeTe between the two states was as low as 0.5W, with zero power consumption during steady state operation, making it a non-volatile RF switch. 
To the authors' knowledge, this is the first reported implementation of an RF phase change switch in a 4-terminal, inline configuration.", "title": "" }, { "docid": "bfdfd911e913c4dbe7a01e775ae6f5bf", "text": "With the upgrowing of digital processing of images and film archiving, the need for assisted or unsupervised restoration required the development of a series of methods and techniques. Among them, image inpainting is maybe the most impressive and useful. Based on partial derivative equations or texture synthesis, many other hybrid techniques have been proposed recently. The need for an analytical comparison, beside the visual one, urged us to perform the studies shown in the present paper. Starting with an overview of the domain, an evaluation of the five methods was performed using a common benchmark and measuring the PSNR. Conclusions regarding the performance of the investigated algorithms have been presented, categorizing them in function of the restored image structure. Based on these experiments, we have proposed an adaptation of Oliveira's and Hadhoud's algorithms, which are performing well on images with natural defects.", "title": "" }, { "docid": "aa30fc0f921509b1f978aeda1140ffc0", "text": "Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.", "title": "" }, { "docid": "c29349c32074392e83f51b1cd214ec8a", "text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.", "title": "" }, { "docid": "7a6d32d50e3b1be70889fc85ffdcac45", "text": "Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. 
The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning zero vector when applied to a constant image. Our algorithm comprises of outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to render the spectral analysis for the solutions of the corresponding linear equations. In addition, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.", "title": "" }, { "docid": "96f4f77f114fec7eca22d0721c5efcbe", "text": "Aggregation structures with explicit information, such as image attributes and scene semantics, are effective and popular for intelligent systems for assessing aesthetics of visual data. However, useful information may not be available due to the high cost of manual annotation and expert design. In this paper, we present a novel multi-patch (MP) aggregation method for image aesthetic assessment. Different from state-of-the-art methods, which augment an MP aggregation network with various visual attributes, we train the model in an end-to-end manner with aesthetic labels only (i.e., aesthetically positive or negative). We achieve the goal by resorting to an attention-based mechanism that adaptively adjusts the weight of each patch during the training process to improve learning efficiency. In addition, we propose a set of objectives with three typical attention mechanisms (i.e., average, minimum, and adaptive) and evaluate their effectiveness on the Aesthetic Visual Analysis (AVA) benchmark. Numerical results show that our approach outperforms existing methods by a large margin. We further verify the effectiveness of the proposed attention-based objectives via ablation studies and shed light on the design of aesthetic assessment systems.", "title": "" }, { "docid": "49e76ffb51f11339950005ddeef71f3e", "text": "Multichannel die probing increases test speed and lowers the overall cost of testing. A new high-density wafer probe card based on MEMS technology is presented in this paper. MEMS-based microtest-channels have been designed to establish high-speed low-resistance connectivity between the die-under-test and the tester at the wafer level. The proposed test scheme can be used to probe fine pitch pads and interconnects of a new generation of 3-D integrated circuits. The proposed MEMS probe, which is fabricated with two masks, supports \\(10^{6}\\) lifetime touchdowns. Measurement results using a prototype indicate that the proposed architecture can be used to conduct manufacturing tests up to 38.6 GHz with less than -1-dB insertion loss while maintaining 11.4-m\\(\\Omega \\) contact resistance. 
The measured return loss of the probe at 39.6 GHz is -12.05 dB.", "title": "" }, { "docid": "2aed918913e6b72603e3dfdfca710572", "text": "We investigate the task of building a domain aware chat system which generates intelligent responses in a conversation comprising of different domains. The domain in this case is the topic or theme of the conversation. To achieve this, we present DOM-Seq2Seq, a domain aware neural network model based on the novel technique of using domain-targeted sequence-to-sequence models (Sutskever et al., 2014) and a domain classifier. The model captures features from current utterance and domains of the previous utterances to facilitate the formation of relevant responses. We evaluate our model on automatic metrics and compare our performance with the Seq2Seq model.", "title": "" }, { "docid": "7368671d20b4f4b30a231d364eb501bc", "text": "In this article, we study the problem of Web user profiling, which is aimed at finding, extracting, and fusing the “semantic”-based user profile from the Web. Previously, Web user profiling was often undertaken by creating a list of keywords for the user, which is (sometimes even highly) insufficient for main applications. This article formalizes the profiling problem as several subtasks: profile extraction, profile integration, and user interest discovery. We propose a combination approach to deal with the profiling tasks. Specifically, we employ a classification model to identify relevant documents for a user from the Web and propose a Tree-Structured Conditional Random Fields (TCRF) to extract the profile information from the identified documents; we propose a unified probabilistic model to deal with the name ambiguity problem (several users with the same name) when integrating the profile information extracted from different sources; finally, we use a probabilistic topic model to model the extracted user profiles, and construct the user interest model. Experimental results on an online system show that the combination approach to different profiling tasks clearly outperforms several baseline methods. The extracted profiles have been applied to expert finding, an important application on the Web. Experiments show that the accuracy of expert finding can be improved (ranging from +6% to +26% in terms of MAP) by taking advantage of the profiles.", "title": "" }, { "docid": "9352d3d38094cc083ab3958d42b4d69a", "text": "We performed a clinical study to evaluate the unawareness of dyskinesias in patients affected by Parkinson's disease (PD) and Huntington's disease (HD). Thirteen PD patients with levodopa-induced dyskinesias and 9 HD patients were enrolled. Patients were asked to evaluate the presence of dyskinesias while performing specific motor tasks. The Abnormal Involuntary Movement Scale (AIMS) and Goetz dyskinesia rating scale were administered to determine the severity of dyskinesias. The Unified Parkinson's disease rating scale (UPDRS) and Unified Huntington's Disease Rating Scale (UHDRS) were used in PD and HD patients, respectively. In PD we found a significant negative relationship between unawareness score at hand pronation-supination and AIMS score for upper limbs. In HD we found a significant positive relationship between total unawareness score and disease duration. 
In PD the unawareness seems to be inversely related with severity of dyskinesias, while in HD it is directly related to disease duration and severity.", "title": "" }, { "docid": "f13cbc36f2c51c5735185751ddc2500e", "text": "This paper presents an overview of the road and traffic sign detection and recognition. It describes the characteristics of the road signs, the requirements and difficulties behind road signs detection and recognition, how to deal with outdoor images, and the different techniques used in the image segmentation based on the colour analysis, shape analysis. It shows also the techniques used for the recognition and classification of the road signs. Although image processing plays a central role in the road signs recognition, especially in colour analysis, but the paper points to many problems regarding the stability of the received information of colours, variations of these colours with respect to the daylight conditions, and absence of a colour model that can led to a good solution. This means that there is a lot of work to be done in the field, and a lot of improvement can be achieved. Neural networks were widely used in the detection and the recognition of the road signs. The majority of the authors used neural networks as a recognizer, and as classifier. Some other techniques such as template matching or classical classifiers were also used. New techniques should be involved to increase the robustness, and to get faster systems for real-time applications.", "title": "" } ]
scidocsrr
66c7e4f0095288b88b5d120c51a4b519
Smart Home Communication Technologies and Applications: Wireless Protocol Assessment for Home Area Network Resources
[ { "docid": "78abbde692e13c6075269ac82b3f1123", "text": "Smart Metering is one of the key issues in modern energy efficiency technologies. Several efforts have been recently made in developing suitable communication protocols for metering data management and transmission, and the Metering-Bus (M-Bus) is a relevant standard example, with a wide diffusion in the European market. This paper deals with its wireless evolution, namely Wireless M-Bus (WM-Bus), and in particular looks at it from the energy consumption perspective. Indeed, specially in those applicative scenarios where the grid powering is not available, like in water and gas metering settings, it is fundamental to guarantee the sustainability of the meter itself, by means of long-life batteries or suitable energy harvesting technologies. The present work analyzes all these aspects directly referring to a specific HW/SW implementation of the WM-Bus variants, providing some useful guidelines for its application in the smart water grid context.", "title": "" }, { "docid": "4a8c9a2301ea45d6c18ec5ab5a75a2ba", "text": "We propose in this paper a computer vision-based posture recognition method for home monitoring of the elderly. The proposed system performs human detection prior to the posture analysis; posture recognition is performed only on a human silhouette. The human detection approach has been designed to be robust to different environmental stimuli. Thus, posture is analyzed with simple and efficient features that are not designed to manage constraints related to the environment but only designed to describe human silhouettes. The posture recognition method, based on fuzzy logic, identifies four static postures and is robust to variation in the distance between the camera and the person, and to the person's morphology. With an accuracy of 74.29% of satisfactory posture recognition, this approach can detect emergency situations such as a fall within a health smart home.", "title": "" } ]
[ { "docid": "307267213b63577ce020cf206d0ea5e0", "text": "Note. This article has been co-published in the British Journal of Sports Medicine (doi:10.1136/bjsports-2018-099193). Mountjoy is with the Department of Family Medicine, Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Canada. Sundgot-Borgen is with the Department of Sports Medicine, The Norwegian School of Sport Sciences, Oslo, Norway. Burke is with Sports Nutrition, Australian Institute of Sport, Belconnen, Australia, and Centre for Exercise and Nutrition, Mary MacKillop Institute for Health Research, Melbourne, Australia. Ackerman is with the Divisions of Sports Medicine and Endocrinology, Boston Children’s Hospital and the Neuroendocrine Unit, Massachusetts General Hospital; Harvard Medical School, Boston, Massachusetts. Blauwet is with the Department of Physical Medicine and Rehabilitation, Harvard Medical School, Spaulding Rehabilitation Hospital/Brigham and Women’s Hospital, Boston, Massachusetts. Constantini is with the Heidi Rothberg Sport Medicine Center, Shaare Zedek Medical Center, Hebrew University, Jerusalem, Israel. Lebrun is with the Department of Family Medicine, Faculty of Medicine & Dentistry, and Glen Sather Sports Medicine Clinic, University of Alberta, Edmonton, Alberta, Canada. Melin is with the Department of Nutrition, Exercise and Sport, University of Copenhagen, Frederiksberg, Denmark. Meyer is with the Health Sciences Department, University of Colorado, Colorado Springs, Colorado. Sherman is a counselor in Bloomington, Indiana. Tenforde is with the Department of Physical Medicine and Rehabilitation, Harvard Medical School, Spaulding Rehabilitation Hospital, Charlestown, Massachusetts. Klungland Torstveit is with the Faculty of Health and Sport Sciences, University of Agder, Kristiansand, Norway. Budgett is with the IOC Medical and Scientific Department, Lausanne, Switzerland. Address author correspondence to Margo Mountjoy at mmsportdoc@mcmaster.ca. Margo Mountjoy McMaster University", "title": "" }, { "docid": "65557b1b1e43e4f98f8edea6869d35b3", "text": "Several new genomics technologies have become available that offer long-read sequencing or long-range mapping with higher throughput and higher resolution analysis than ever before. These long-range technologies are rapidly advancing the field with improved reference genomes, more comprehensive variant identification and more complete views of transcriptomes and epigenomes. However, they also require new bioinformatics approaches to take full advantage of their unique characteristics while overcoming their complex errors and modalities. Here, we discuss several of the most important applications of the new technologies, focusing on both the currently available bioinformatics tools and opportunities for future research. Various genomics-related fields are increasingly taking advantage of long-read sequencing and long-range mapping technologies, but making sense of the data requires new analysis strategies. This Review discusses bioinformatics tools that have been devised to handle the numerous characteristic features of these long-range data types, with applications in genome assembly, genetic variant detection, haplotype phasing, transcriptomics and epigenomics.", "title": "" }, { "docid": "4d10a95ee1cd2e1a078d4f43bc05e75b", "text": "Many storage security breaches have recently been reported in the mass media as the direct result of new breach disclosure state laws across the United States (unfortunately, not internationally). 
In this paper, we provide an empirical analysis of disclosed storage security breaches for the period of 2005-2006. By processing raw data from the best available sources, we seek to understand the what, who, how, where, and when questions about storage security breaches so that others can build upon this evidence when developing best practices for preventing and mitigating storage breaches. While some policy formulation has already started in reaction to media reports (many without empirical analysis), this work provides initial empirical analysis upon which future empirical analysis and future policy decisions can be based.", "title": "" }, { "docid": "ba8cddc6ed18f941ed7409524137c28c", "text": "This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent’s past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.", "title": "" }, { "docid": "3b2aa97c0232857dffa971d9c040d430", "text": "This paper provides a critical analysis of Mobile Learning projects published before the end of 2007. The review uses a Mobile Learning framework to evaluate and categorize 102 Mobile Learning projects, and to briefly introduce exemplary projects for each category. All projects were analysed with the criteria: context, tools, control, communication, subject and objective. Although a significant number of projects have ventured to incorporate the physical context into the learning experience, few projects include a socializing context. Tool support ranges from pure content delivery to content construction by the learners. Although few projects explicitly discuss the Mobile Learning control issues, one can find all approaches from pure teacher control to learner control. Despite the fact that mobile phones initially started as a communication device, communication and collaboration play a surprisingly small role in Mobile Learning projects. Most Mobile Learning projects support novices, although one might argue that the largest potential is supporting advanced learners. All results show the design space and reveal gaps in Mobile Learning research.", "title": "" }, { "docid": "084ceedc5a45b427503f776a5c9fea68", "text": "Although the worldwide incidence of infant botulism is rare, the majority of cases are diagnosed in the United States. An infant can acquire botulism by ingesting Clostridium botulinum spores, which are found in soil or honey products. The spores germinate into bacteria that colonize the bowel and synthesize toxin. As the toxin is absorbed, it irreversibly binds to acetylcholine receptors on motor nerve terminals at neuromuscular junctions. The infant with botulism becomes progressively weak, hypotonic and hyporeflexic, showing bulbar and spinal nerve abnormalities. Presenting symptoms include constipation, lethargy, a weak cry, poor feeding and dehydration. A high index of suspicion is important for the diagnosis and prompt treatment of infant botulism, because this disease can quickly progress to respiratory failure. Diagnosis is confirmed by isolating the organism or toxin in the stool and finding a classic electromyogram pattern. 
Treatment consists of nutritional and respiratory support until new motor endplates are regenerated, which results in spontaneous recovery. Neurologic sequelae are seldom seen. Some children require outpatient tube feeding and may have persistent hypotonia.", "title": "" }, { "docid": "a0b8475e0f50bc603d2280c4dcea8c0f", "text": "We provide data on the extent to which computer-related audit procedures are used and whether two factors, control risk assessment and audit firm size, influence computer-related audit procedures use. We used a field-based questionnaire to collect data from 181 auditors representing Big 4, national, regional, and local firms. Results indicate that computer-related audit procedures are generally used when obtaining an understanding of the client system and business processes and testing computer controls. Furthermore, 42.9 percent of participants indicate that they relied on internal controls; however, this percentage increases significantly for auditors at Big 4 firms. Finally, our results raise questions for future research regarding computer-related audit procedure use.", "title": "" }, { "docid": "13d8bfe718d5346b886c8e6bdac9abab", "text": "In the Story Cloze Test, a system is presented with a 4-sentence prompt to a story, and must determine which one of two potential endings is the ‘right’ ending to the story. Previous work has shown that ignoring the training set and training a model on the validation set can achieve high accuracy on this task due to stylistic differences between the story endings in the training set and validation and test sets. Following this approach, we present a simpler fully-neural approach to the Story Cloze Test using skip-thought embeddings of the stories in a feed-forward network that achieves close to state-of-the-art performance on this task without any feature engineering. We also find that considering just the last sentence of the prompt instead of the whole prompt yields higher accuracy with our approach.", "title": "" }, { "docid": "1451c145b1ed5586755a2c89517a582f", "text": "A robust automatic micro-expression recognition system would have broad applications in national safety, police interrogation, and clinical diagnosis. Developing such a system requires high quality databases with sufficient training samples which are currently not available. We reviewed the previously developed micro-expression databases and built an improved one (CASME II), with higher temporal resolution (200 fps) and spatial resolution (about 280×340 pixels on facial area). We elicited participants' facial expressions in a well-controlled laboratory environment and proper illumination (such as removing light flickering). Among nearly 3000 facial movements, 247 micro-expressions were selected for the database with action units (AUs) and emotions labeled. For baseline evaluation, LBP-TOP and SVM were employed respectively for feature extraction and classifier with the leave-one-subject-out cross-validation method. The best performance is 63.41% for 5-class classification.", "title": "" }, { "docid": "91f232a7cee24a898c9c2cf6d9938b55", "text": "In this letter, a 4 ×4 substrate integrated waveguide (SIW)-fed circularly polarized (CP) antenna array with a broad axial-ratio (AR) bandwidth is designed and fabricated by multilayer printed circuit board (PCB) technology. 
The antenna array consists of 16 sequentially rotated elliptical cavities fed by slots on the SIW acting as the radiating elements, four 1-to-4 SIW power dividers, and a transition from a coaxial cable to the SIW. The widened AR bandwidth of the antenna array is achieved by using an improved SIW power divider. The antenna prototype was fabricated and measured, and the discrepancies between simulations and measurements are carefully analyzed.", "title": "" }, { "docid": "44aa302a4fcb1793666b6aedc9aa5798", "text": "Unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain's core algorithms.", "title": "" }, { "docid": "cccecb08c92f8bcec4a359373a20afcb", "text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.", "title": "" }, { "docid": "b7b2f1c59dfc00ab6776c6178aff929c", "text": "Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the networks edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. 
Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.", "title": "" }, { "docid": "8f4ce2d2ec650a3923d27c3188f30f38", "text": "Synthetic aperture radar (SAR) interferometry is a modern efficient technique that allows reconstructing the height profile of the observed scene. However, apart for the presence of critical nonlinear inversion steps, particularly crucial in abrupt topography scenarios, it does not allow one to separate different scattering mechanisms in the elevation (height) direction within the ground pixel. Overlay of scattering at different elevations in the same azimuth-range resolution cell can be due either to the penetration of the radiation below the surface or to perspective ambiguities caused by the side-looking geometry. Multibaseline three-dimensional (3-D) SAR focusing allows overcoming such a limitation and has thus raised great interest in the recent research. First results with real data have been only obtained in the laboratory and with airborne systems, or with limited time-span and spatial-coverage spaceborne data. This work presents a novel approach for the tomographic processing of European Remote Sensing satellite (ERS) real data for extended scenes and long time span. Besides facing problems common to the airborne case, such as the nonuniformly spaced passes, this processing requires tackling additional difficulties specific to the spaceborne case, in particular a space-varying phase calibration of the data due to atmospheric variations and possible scene deformations occurring for years-long temporal spans. First results are presented that confirm the capability of ERS multipass tomography to resolve multiple targets within the same azimuth-range cell and to map the 3-D scattering properties of the illuminated scene.", "title": "" }, { "docid": "e38ab3500bbef918801d5c4b1a07ef6c", "text": "A small and compact triple-band microstrip-fed printed monopole antenna for Wireless Local Area Network (WLAN) and Worldwide Interoperability for Microwave Access (WiMAX) is presented. The proposed antenna consists of a rectangular radiating patch with L- and U-shaped slots and ground plane. A parametric study on the lengths of the U- and L-shaped slots of the proposed antenna is provided to obtain the required operational frequency bands-namely, WLAN (2.4/5.2/5.8 GHz) and WiMAX (2.5/3.5/5.5 GHz). The proposed antenna is small (15 × 15 × 1.6 mm 3) when compared to previously well-known double- and triple-band monopole antennas. The simulation and measurement results show that the designed antenna is capable of operating over the 2.25-2.85, 3.4-4.15, and 4.45-8 GHz frequency bands while rejecting frequency ranges between these three bands. Omnidirectional radiation pattern and acceptable antenna gain are achieved over the operating bands.", "title": "" }, { "docid": "c6e001e6e4964553f9087094e221cb4c", "text": "Brain cells normally respond adaptively to bioenergetic challenges resulting from ongoing activity in neuronal circuits, and from environmental energetic stressors such as food deprivation and physical exertion. At the cellular level, such adaptive responses include the “strengthening” of existing synapses, the formation of new synapses, and the production of new neurons from stem cells. 
At the molecular level, bioenergetic challenges result in the activation of transcription factors that induce the expression of proteins that bolster the resistance of neurons to the kinds of metabolic, oxidative, excitotoxic, and proteotoxic stresses involved in the pathogenesis of brain disorders including stroke, and Alzheimer’s and Parkinson’s diseases. Emerging findings suggest that lifestyles that include intermittent bioenergetic challenges, most notably exercise and dietary energy restriction, can increase the likelihood that the brain will function optimally and in the absence of disease throughout life. Here, we provide an overview of cellular and molecular mechanisms that regulate brain energy metabolism, how such mechanisms are altered during aging and in neurodegenerative disorders, and the potential applications to brain health and disease of interventions that engage pathways involved in neuronal adaptations to metabolic stress.", "title": "" }, { "docid": "e00ba988f473d4729f2e593171e15185", "text": "To achieve a more effective solution for large-scale image classification (i.e., classifying millions of images into thousands or even tens of thousands of object classes or categories), a deep multi-task learning algorithm is developed by seamlessly integrating deep CNNs with multi-task learning over the concept ontology, where the concept ontology is used to organize large numbers of object classes or categories hierarchically and determine the inter-related learning tasks automatically. Our deep multi-task learning algorithm can integrate the deep CNNs to learn more discriminative high-level features for image representation, and it can also leverage multi-task learning and inter-level relationship constraints to train a more discriminative tree classifier over the concept ontology and control the inter-level error propagation effectively. In our deep multi-task learning algorithm, we can use back propagation to simultaneously refine both the relevant node classifiers (at different levels of the concept ontology) and the deep CNNs according to a joint objective function. The experimental results have demonstrated that our deep multi-task learning algorithm can achieve very competitive results on both the accuracy and the cost of feature extraction for large-scale image classification.", "title": "" }, { "docid": "36056ae83d1c0b59b8f78d3d68099a3b", "text": "A compact, single-feed, broadband circularly polarized patch antenna is proposed in this letter. The antenna comprises an H-shaped microstrip patch printed over a metamaterial-inspired reactive impedance surface (RIS). The RIS structure comprising a lattice of 4 × 4 periodic metallic square patches helps to increase the bandwidth of the antenna. The final optimized structure exhibits an impedance bandwidth of 44.5% (4.64–7.3 GHz) along with a 3 dB axial-ratio bandwidth of 27.5% (4.55–6 GHz). Moreover, the proposed antenna yields a good broadside gain of 7.2 dBi at 5.5 GHz. The radiation efficiency of the present structure is better than 77% for the entire band of operation.", "title": "" }, { "docid": "33906623c1ac445e18a30805d2a122cf", "text": "Diagnostic problems abound for individuals, organizations, and society. The stakes are high, often life and death. Such problems are prominent in the fields of health care, public safety, business, environment, justice, education, manufacturing, information processing, the military, and government. 
Particular diagnostic questions are raised repetitively, each time calling for a positive or negative decision about the presence of a given condition or the occurrence (often in the future) of a given event. Consider the following illustrations: Is a cancer present? Will this individual commit violence? Are there explosives in this luggage? Is this aircraft fit to fly? Will the stock market advance today? Is this assembly-line item flawed? Will an impending storm strike? Is there oil in the ground here? Is there an unsafe radiation level in my house? Is this person lying? Is this person using drugs? Will this applicant succeed? Will this book have the information I need? Is that plane intending to attack this ship? Is this applicant legally disabled? Does this tax return justify an audit? Each time such a question is raised, the available evidence is assessed by a person or a device or a combination of the two, and a choice is then made between the two alternatives, yes or no. The evidence may be a x-ray, a score on a psychiatric test, a chemical analysis, and so on. In considering just yes–no alternatives, such diagnoses do not exhaust the types of diagnostic questions that exist. Other questions, for example, a differential diagnosis in medicine, may require considering a half dozen or more possible alternatives. Decisions of the yes–no type, however, are prevalent and important, as the foregoing examples suggest, and they are the focus of our analysis. We suggest that diagnoses of this type rest on a general process with common characteristics across fields, and that the process warrants scientific analysis as a discipline in its own right (Swets, 1988, 1992). The main purpose of this article is to describe two ways, one obvious and one less obvious, in which diagnostic performance can be improved. The more obvious way to improve diagnosis is to improve its accuracy, that is, its ability to distinguish between the two diagnostic alternatives and to select the correct one. The less obvious way to improve diagnosis is to increase the utility of the diagnostic decisions that are made. That is, apart from improving accuracy, there is a need to produce decisions that are in tune both with the situational probabilities of the alternative diagnostic conditions and with the benefits and costs, respectively, of correct and incorrect decisions. Methods exist to achieve both goals. These methods depend on a measurement technique that separately and independently quantifies the two aspects of diagnostic performance, namely, its accuracy and the balance it provides among the various possible types of decision outcomes. We propose that together the method for measuring diagnostic performance and the methods for improving it constitute the fundamentals of a science of diagnosis. We develop the idea that this incipient discipline has been demonstrated to improve diagnosis in several fields, but is nonetheless virtually unknown and unused in others. We consider some possible reasons for the disparity between the general usefulness of the methods and their lack of general use, and we advance some ideas for reducing this disparity. To anticipate, we develop two successful examples of these methods in some detail: the prognosis of violent behavior and the diagnosis of breast and prostate cancer. We treat briefly other successful examples, such as weather forecasting and admission to a selective school. 
We also develop in detail two examples of fields that would markedly benefit from application of the methods, namely the detection of cracks in airplane wings and the detection of the virus of AIDS. Briefly treated are diagnoses of dangerous conditions for in-flight aircraft and of behavioral impairments that qualify as disabilities in individuals.", "title": "" }, { "docid": "99ffaa3f845db7b71a6d1cbc62894861", "text": "There is a huge amount of historical documents in libraries and in various National Archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines), automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade and dedicated to documents of historical interest.", "title": "" } ]
scidocsrr
4a3ff88c9d0f4c9f47ec7a6d99c65d47
A survey on methods for colour image indexing and retrieval in image databases
[ { "docid": "26508379e41da5e3b38dd944fc9e4783", "text": "We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We describe three Photobook tools in particular: one that allows search based on grey-level appearance, one that uses 2-D shape, and a third that allows search based on textural properties.", "title": "" } ]
[ { "docid": "9c0baef3b1d0c0f13b87a2dbeb4769f9", "text": "In a longitudinal study of 140 eighth-grade students, self-discipline measured by self-report, parent report, teacher report, and monetary choice questionnaires in the fall predicted final grades, school attendance, standardized achievement-test scores, and selection into a competitive high school program the following spring. In a replication with 164 eighth graders, a behavioral delay-of-gratification task, a questionnaire on study habits, and a group-administered IQ test were added. Self-discipline measured in the fall accounted for more than twice as much variance as IQ in final grades, high school selection, school attendance, hours spent doing homework, hours spent watching television (inversely), and the time of day students began their homework. The effect of self-discipline on final grades held even when controlling for first-marking-period grades, achievement-test scores, and measured IQ. These findings suggest a major reason for students falling short of their intellectual potential: their failure to exercise self-discipline.", "title": "" }, { "docid": "3a855c3c3329ff63037711e8d17249e3", "text": "In this work, we present an adaptation of the sequence-tosequence model for structured vision tasks. In this model, the output variables for a given input are predicted sequentially using neural networks. The prediction for each output variable depends not only on the input but also on the previously predicted output variables. The model is applied to spatial localization tasks and uses convolutional neural networks (CNNs) for processing input images and a multi-scale deconvolutional architecture for making spatial predictions at each step. We explore the impact of weight sharing with a recurrent connection matrix between consecutive predictions, and compare it to a formulation where these weights are not tied. Untied weights are particularly suited for problems with a fixed sized structure, where different classes of output are predicted at different steps. We show that chain models achieve top performing results on human pose estimation from images and videos.", "title": "" }, { "docid": "3d3f5b45b939f926d1083bab9015e548", "text": "Industry is facing an era characterised by unpredictable market changes and by a turbulent competitive environment. The key to compete in such a context is to achieve high degrees of responsiveness by means of high flexibility and rapid reconfiguration capabilities. The deployment of modular solutions seems to be part of the answer to face these challenges. Semantic modelling and ontologies may represent the needed knowledge representation to support flexibility and modularity of production systems, when designing a new system or when reconfiguring an existing one. Although numerous ontologies for production systems have been developed in the past years, they mainly focus on discrete manufacturing, while logistics aspects, such as those related to internal logistics and warehousing, have not received the same attention. The paper aims at offering a representation of logistics aspects, reflecting what has become a de-facto standard terminology in industry and among researchers in the field. Such representation is to be used as an extension to the already-existing production systems ontologies that are more focused on manufacturing processes. 
The paper presents the structure of the hierarchical relations within the examined internal logistics elements, namely Storage and Transporters, structuring them in a series of classes and sub-classes, suggesting also the relationships and the attributes to be considered to complete the modelling. Finally, the paper proposes an industrial example with a miniload system to show how such a modelling of internal logistics elements could be instanced in the real world. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7499f88de9d2f76008dc38e96b08ca0a", "text": "Refractory and super-refractory status epilepticus (SE) are serious illnesses with a high risk of morbidity and even fatality. In the setting of refractory generalized convulsive SE (GCSE), there is ample justification to use continuous infusions of highly sedating medications—usually midazolam, pentobarbital, or propofol. Each of these medications has advantages and disadvantages, and the particulars of their use remain controversial. Continuous EEG monitoring is crucial in guiding the management of these critically ill patients: in diagnosis, in detecting relapse, and in adjusting medications. Forms of SE other than GCSE (and its continuation in a “subtle” or nonconvulsive form) should usually be treated far less aggressively, often with nonsedating anti-seizure drugs (ASDs). Management of “non-classic” NCSE in ICUs is very complicated and controversial, and some cases may require aggressive treatment. One of the largest problems in refractory SE (RSE) treatment is withdrawing coma-inducing drugs, as the prolonged ICU courses they prompt often lead to additional complications. In drug withdrawal after control of convulsive SE, nonsedating ASDs can assist; medical management is crucial; and some brief seizures may have to be tolerated. For the most refractory of cases, immunotherapy, ketamine, ketogenic diet, and focal surgery are among several newer or less standard treatments that can be considered. The morbidity and mortality of RSE is substantial, but many patients survive and even return to normal function, so RSE should be treated promptly and as aggressively as the individual patient and type of SE indicate.", "title": "" }, { "docid": "92b8206a1a5db0be7df28ed2e645aafc", "text": "Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency. They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count (the Xception architecture) and considerably reducing the number of parameters required to perform at a given level (the MobileNets family of architectures). Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results. In this work, we study how depthwise separable convolutions can be applied to neural machine translation. We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results. 
In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation. We also introduce a new \"super-separable\" convolution operation that further reduces the number of parameters and computational cost of the models.", "title": "" }, { "docid": "05722ec4947154aea328e470a6872a64", "text": "This article focuses on studying the effects of muscle and fat percentages on the exergy behavior of the human body under several environmental conditions. The main objective is to relate the thermal comfort indicators with exergy rates, resulting in a Second Law perspective to evaluate thermal environment. A phenomenological model is proposed of the human body with four layers: core, muscle, fat and skin. The choice of a simplified model is justified by the facility to variate the amount of mass in each tissue without knowing how it spreads around the body. After validated, the model was subjected to a set of environmental conditions and body compositions. The results obtained indicate that the area normalization (Watts per square meter) may be used as a safe generalization for the exergy transfer to environment. Moreover, the destroyed exergy itself is sufficient to evaluate the thermal sensation when the model is submitted to environmental temperatures lower than that considered for the thermal neutrality condition (and, in this text, the thermal comfort) . Nevertheless, for environments with temperatures higher than the calculated for the thermal neutrality, the combination of destroyed exergy and the rate of exergy transferred to the environment should be used to properly evaluate thermal comfort.", "title": "" }, { "docid": "14739a86487a26452bd73da11264b9e4", "text": "This paper presents a systematic online prediction method (Social-Forecast) that is capable to accurately forecast the popularity of videos promoted by social media. Social-Forecast explicitly considers the dynamically changing and evolving propagation patterns of videos in social media when making popularity forecasts, thereby being situation and context aware. Social-Forecast aims to maximize the forecast reward, which is defined as a tradeoff between the popularity prediction accuracy and the timeliness with which a prediction is issued. The forecasting is performed online and requires no training phase or a priori knowledge. We analytically bound the prediction performance loss of Social-Forecast as compared to that obtained by an omniscient oracle and prove that the bound is sublinear in the number of video arrivals, thereby guaranteeing its short-term performance as well as its asymptotic convergence to the optimal performance. In addition, we conduct extensive experiments using real-world data traces collected from the videos shared in RenRen, one of the largest online social networks in China. These experiments show that our proposed method outperforms existing view-based approaches for popularity prediction (which are not context-aware) by more than 30% in terms of prediction rewards.", "title": "" }, { "docid": "020781cec754310dac5b281d7f84bbf5", "text": "Quantitative data cleaning relies on the use of statistical methods to identify and repair data quality problems while logical data cleaning tackles the same problems using various forms of logical reasoning over declarative dependencies. 
Each of these approaches has its strengths: the logical approach is able to capture subtle data quality problems using sophisticated dependencies, while the quantitative approach excels at ensuring that the repaired data has desired statistical properties. We propose a novel framework within which these two approaches can be used synergistically to combine their respective strengths. We instantiate our framework using (i) metric functional dependencies (metric FDs), a type of dependency that generalizes the commonly used FDs to identify inconsistencies in domains where only large differences in metric data are considered to be a data quality problem, and (ii) repairs that modify the inconsistent data so as to minimize statistical distortion, measured using the Earth Mover’s Distance (EMD). We show that the problem of computing a statistical distortion minimal repair is NP-hard. Given this complexity, we present an efficient algorithm for finding a minimal repair that has a small statistical distortion using EMD computation over semantically related attributes. To identify semantically related attributes, we present a sound and complete axiomatization and an efficient algorithm for testing implication of metric FDs. While the complexity of inference for some other FD extensions is co-NP complete, we show that the inference problem for metric FDs remains linear, as in traditional FDs. We prove that every instance that can be generated by our repair algorithm is set minimal (with no redundant changes). Our experimental evaluation demonstrates that our techniques obtain a considerably lower statistical distortion than existing repair techniques, while achieving similar levels of efficiency. ∗Supported by NSERC BIN (and Szlichta by MITACS).", "title": "" }, { "docid": "ec1120018899c6c9fe16240b8e35efac", "text": "Redundant collagen deposition at sites of healing dermal wounds results in hypertrophic scars. Adipose-derived stem cells (ADSCs) exhibit promise in a variety of anti-fibrosis applications by attenuating collagen deposition. The objective of this study was to explore the influence of an intralesional injection of ADSCs on hypertrophic scar formation by using an established rabbit ear model. Twelve New Zealand albino rabbits were equally divided into three groups, and six identical punch defects were made on each ear. On postoperative day 14 when all wounds were completely re-epithelialized, the first group received an intralesional injection of ADSCs on their right ears and Dulbecco’s modified Eagle’s medium (DMEM) on their left ears as an internal control. Rabbits in the second group were injected with conditioned medium of the ADSCs (ADSCs-CM) on their right ears and DMEM on their left ears as an internal control. Right ears of the third group remained untreated, and left ears received DMEM. We quantified scar hypertrophy by measuring the scar elevation index (SEI) on postoperative days 14, 21, 28, and 35 with ultrasonography. Wounds were harvested 35 days later for histomorphometric and gene expression analysis. Intralesional injections of ADSCs or ADSCs-CM both led to scars with a far more normal appearance and significantly decreased SEI (44.04 % and 32.48 %, respectively, both P <0.01) in the rabbit ears compared with their internal controls. 
Furthermore, we confirmed that collagen was organized more regularly and that there was a decreased expression of alpha-smooth muscle actin (α-SMA) and collagen type Ι in the ADSC- and ADSCs-CM-injected scars according to histomorphometric and real-time quantitative polymerase chain reaction analysis. There was no difference between DMEM-injected and untreated scars. An intralesional injection of ADSCs reduces the formation of rabbit ear hypertrophic scars by decreasing the α-SMA and collagen type Ι gene expression and ameliorating collagen deposition and this may result in an effective and innovative anti-scarring therapy.", "title": "" }, { "docid": "9634e701750984a457189611885b7c81", "text": "A practical text suitable for an introductory or advanced course in formal methods, this book presents a mathematical approach to modeling and designing systems using an extension of the B formalism: Event-B. Based on the idea of refinement, the author’s systematic approach allows the user to construct models gradually and to facilitate a systematic reasoning method by means of proofs. Readers will learn how to build models of programs and, more generally, discrete systems, but this is all done with practice in mind. The numerous examples provided arise from various sources of computer system developments, including sequential programs, concurrent programs, and electronic circuits. The book also contains a large number of exercises and projects ranging in difficulty. Each of the examples included in the book has been proved using the Rodin Platform tool set, which is available free for download at www.event-b.org.", "title": "" }, { "docid": "ddc3241c09a33bde1346623cf74e6866", "text": "This paper presents a new technique for predicting wind speed and direction. This technique is based on using a linear time-series-based model relating the predicted interval to its corresponding one- and two-year old data. The accuracy of the model for predicting wind speeds and directions up to 24 h ahead have been investigated using two sets of data recorded during winter and summer season at Madison weather station. Generated results are compared with their corresponding values when using the persistent model. The presented results validate the effectiveness and accuracy of the proposed prediction model for wind speed and direction.", "title": "" }, { "docid": "6ca24f95127bc4b74cecedd24cf41bb5", "text": "We introduce CCmutator, a mutation generation tool for multithreaded C/C++ programs written using POSIX threads and the recently standardized C++11 concurrency constructs. CCmutator is capable of performing partial mutations and generating higher order mutants, which allow for more focused and complex combinations of elementary mutation operators leading to higher quality mutants. We have implemented CCmutator based on the popular Clang/LLVM compiler framework, which allows CCmutator to be extremely scalable and robust in handling real-world C/C++ applications. CCmutator is also designed in such a way that all mutants of the same order can be generated in parallel, which allows the tool to be easily parallelized on commodity multicore hardware to improve performance.", "title": "" }, { "docid": "314de8eab45f2d3ff392db8d39a9e5f0", "text": "Binary local descriptors are widely used in computer vision thanks to their compactness and robustness to many image transformations such as rotations or scale changes. 
However, more complex transformations, like changes in camera viewpoint, are difficult to deal with using conventional features due to the lack of geometric information about the scene. In this paper, we propose a local binary descriptor which assumes that geometric information is available as a depth map. It employs a local parametrization of the scene surface, obtained through depth information, which is used to build a BRISK-like sampling pattern intrinsic to the scene surface. Although we illustrate the proposed method using the BRISK architecture, the obtained parametrization is rather general and could be embedded into other binary descriptors. Our simulations on a set of synthetically generated scenes show that the proposed descriptor is significantly more stable and distinctive than popular BRISK descriptors under a wide range of viewpoint angle changes.", "title": "" }, { "docid": "9332c32039cf782d19367a9515768e42", "text": "Maternal drug use during pregnancy is associated with fetal passive addiction and neonatal withdrawal syndrome. Cigarette smoking—highly prevalent during pregnancy—is associated with addiction and withdrawal syndrome in adults. We conducted a prospective, two-group parallel study on 17 consecutive newborns of heavy-smoking mothers and 16 newborns of nonsmoking, unexposed mothers (controls). Neurologic examinations were repeated at days 1, 2, and 5. Finnegan withdrawal score was assessed every 3 h during their first 4 d. Newborns of smoking mothers had significant levels of cotinine in the cord blood (85.8 ± 3.4 ng/mL), whereas none of the controls had detectable levels. Similar findings were observed with urinary cotinine concentrations in the newborns (483.1 ± 2.5 μg/g creatinine versus 43.6 ± 1.5 μg/g creatinine; p = 0.0001). Neurologic scores were significantly lower in newborns of smokers than in control infants at days 1 (22.3 ± 2.3 versus 26.5 ± 1.1; p = 0.0001), 2 (22.4 ± 3.3 versus 26.3 ± 1.6; p = 0.0002), and 5 (24.3 ± 2.1 versus 26.5 ± 1.5; p = 0.002). Neurologic scores improved significantly from day 1 to 5 in newborns of smokers (p = 0.05), reaching values closer to control infants. Withdrawal scores were higher in newborns of smokers than in control infants at days 1 (4.5 ± 1.1 versus 3.2 ± 1.4; p = 0.05), 2 (4.7 ± 1.7 versus 3.1 ± 1.1; p = 0.002), and 4 (4.7 ± 2.1 versus 2.9 ± 1.4; p = 0.007). Significant correlations were observed between markers of nicotine exposure and neurologic-and withdrawal scores. We conclude that withdrawal symptoms occur in newborns exposed to heavy maternal smoking during pregnancy.", "title": "" }, { "docid": "491bf7103b8540748b58465ff9238fe7", "text": "We present a new approach for defining groups of populations that are geographically homogeneous and maximally differentiated from each other. As a by-product, it also leads to the identification of genetic barriers between these groups. The method is based on a simulated annealing procedure that aims to maximize the proportion of total genetic variance due to differences between groups of populations (spatial analysis of molecular variance; samova). Monte Carlo simulations were used to study the performance of our approach and, for comparison, the behaviour of the Monmonier algorithm, a procedure commonly used to identify zones of sharp genetic changes in a geographical area. 
Simulations showed that the samova algorithm indeed finds maximally differentiated groups, which do not always correspond to the simulated group structure in the presence of isolation by distance, especially when data from a single locus are available. In this case, the Monmonier algorithm seems slightly better at finding predefined genetic barriers, but can often lead to the definition of groups of populations not differentiated genetically. The samova algorithm was then applied to a set of European roe deer populations examined for their mitochondrial DNA (mtDNA) HVRI diversity. The inferred genetic structure seemed to confirm the hypothesis that some Italian populations were recently reintroduced from a Balkanic stock, as well as the differentiation of groups of populations possibly due to the postglacial recolonization of Europe or the action of a specific barrier to gene flow.", "title": "" }, { "docid": "d98f60a2a0453954543da840076e388a", "text": "The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster with short training times than standard back-propagation, and perform similar as standard back-propagation at convergence.", "title": "" }, { "docid": "fdda6921118e1f5a5d71a0365a6148d9", "text": "This study introduces the development of a Web-based assessment system, the Web-based Assessment and Test Analyses (WATA) system, and examines its impacts on teacher education. The WATA system is a follow-on system, which applies the Triple-A Model (assembling, administering, and appraising). Its functions include (1) an engine for teachers to administer and manage testing, (2) an engine for students to apply tests, and (3) an engine for generating test results and analyses for teachers. Two studies were undertaken to assess the usefulness and potential benefits of the WATA system for teacher education. In the first study, 47 in-service teachers were asked to assess the functions of the WATA system. The results indicated that they were satisfied with the Triple-A Model of the WATA system. In the second study, 30 pre-service teachers were required to use the WATA system during the teacher-training program. After 4 months of experience in using the WATA system, the preservice teachers’ perspectives of assessment have been changed significantly. The findings of these two studies might provide some guidance to help those who are interested in the development of Web-based assessment and intend to infuse information technology into teacher education.", "title": "" }, { "docid": "30999531d4c065b28ec98adfcf0ff6a5", "text": "Recently, nonlinear programming solvers have been used to solve a range of mathematical programs with equilibrium constraints (MPECs). In particular, sequential quadratic programming (SQP) methods have been very successful. This paper examines the local convergence properties of SQP methods applied to MPECs. SQP is shown to converge superlinearly under reasonable assumptions near a strongly stationary point. 
A number of examples are presented that show that some of the assumptions are difficult to relax.", "title": "" }, { "docid": "e8b486ce556a0193148ffd743661bce9", "text": "This chapter presents the fundamentals and applications of the State Machine Replication (SMR) technique for implementing consistent fault-tolerant services. Our focus here is threefold. First we present some fundamentals about distributed computing and three “practical” SMR protocols for different fault models. Second, we discuss some recent work aiming to improve the performance, modularity and robustness of SMR protocols. Finally, we present some prominent applications for SMR and an example of the real code needed for implementing a dependable service using the BFT-SMART replication library.", "title": "" } ]
scidocsrr
df17357725db1bfaf76fc0f01dc09ed9
Computational challenges for sentiment analysis in life sciences
[ { "docid": "42613c6a08ce7d86f81ec51255a1071d", "text": "Happiness and other emotions spread between people in direct contact, but it is unclear whether massive online social networks also contribute to this spread. Here, we elaborate a novel method for measuring the contagion of emotional expression. With data from millions of Facebook users, we show that rainfall directly influences the emotional content of their status messages, and it also affects the status messages of friends in other cities who are not experiencing rainfall. For every one person affected directly, rainfall alters the emotional expression of about one to two other people, suggesting that online social networks may magnify the intensity of global emotional synchrony.", "title": "" }, { "docid": "5ff263cf4a73c202741c46d5582a960a", "text": "Sentiment analysis; Sentiment classification; Feature selection; Emotion detection; Transfer learning; Building resources Abstract Sentiment Analysis (SA) is an ongoing field of research in text mining field. SA is the computational treatment of opinions, sentiments and subjectivity of text. This survey paper tackles a comprehensive overview of the last update in this field. Many recently proposed algorithms’ enhancements and various SA applications are investigated and presented briefly in this survey. These articles are categorized according to their contributions in the various SA techniques. The related fields to SA (transfer learning, emotion detection, and building resources) that attracted researchers recently are discussed. The main target of this survey is to give nearly full image of SA techniques and the related fields with brief details. The main contributions of this paper include the sophisticated categorizations of a large number of recent articles and the illustration of the recent trend of research in the sentiment analysis and its related areas. 2014 Production and hosting by Elsevier B.V. on behalf of Ain Shams University.", "title": "" }, { "docid": "a51803d5c0753f64f5216d2cc225d172", "text": "Twitter is a free social networking and micro-blogging service that enables its millions of users to send and read each other's \"tweets,\" or short, 140-character messages. The service has more than 190 million registered users and processes about 55 million tweets per day. Useful information about news and geopolitical events lies embedded in the Twitter stream, which embodies, in the aggregate, Twitter users' perspectives and reactions to current events. By virtue of sheer volume, content embedded in the Twitter stream may be useful for tracking or even forecasting behavior if it can be extracted in an efficient manner. In this study, we examine the use of information embedded in the Twitter stream to (1) track rapidly-evolving public sentiment with respect to H1N1 or swine flu, and (2) track and measure actual disease activity. We also show that Twitter can be used as a measure of public interest or concern about health-related events. Our results show that estimates of influenza-like illness derived from Twitter chatter accurately track reported disease levels.", "title": "" } ]
[ { "docid": "79caff0b1495900b5c8f913562d3e84d", "text": "We propose a formal model of web security based on an abstraction of the web platform and use this model to analyze the security of several sample web mechanisms and applications. We identify three distinct threat models that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network and/or leverage sites designed to display user-supplied content. We propose two broadly applicable security goals and study five security mechanisms. In our case studies, which include HTML5 forms, Referer validation, and a single sign-on solution, we use a SAT-based model-checking tool to find two previously known vulnerabilities and three new vulnerabilities. Our case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead.", "title": "" }, { "docid": "49a538fc40d611fceddd589b0c9cb433", "text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.", "title": "" }, { "docid": "3265677221270162ae7eaac330f64664", "text": "We describe LifeNet, a new common sense knowledge base that captures a first-person model of human experience in terms of a propositional representation. LifeNet represents knowledge as an undirected graphical model relating 80,000 egocentric propositions with 415,000 temporal and atemporal links between these propositions. We explain how we built LifeNet by extracting its propositions and links from the Open Mind Common Sense corpus of common sense assertions, present a method for reasoning with the resulting knowledge base, evaluate the knowledge in LifeNet and the quality of inference, and describe a knowledge acquisition system that lets people interact with LifeNet to extend it further. INTRODUCTION We are interested in building ‘common sense’ models of the structure and flow of human life. 
Today’s computer systems lack such models—they know almost nothing about the kinds of activities people engage in, the actions we are capable of and their likely effects, the kinds of places we spend our time and the things that are found there, the types of events we enjoy and types we loathe, and so forth. By finding ways to give computers the ability to represent and reason about ordinary life, we believe they can be made more helpful participants in the human world. An adequate common sense model should include knowledge about a wide range of objects, states, events, and situations. For example, a common sense model of human life should enable the following kinds of predictions: • When someone is thirsty, it is likely that they will soon be drinking a liquid beverage. • When someone is at an airport, it is likely they possess a plane ticket. • When someone is typing at a computer, it is possible that they are composing an e-mail. • When someone is crying, it is likely that they feel sad or are in pain. • After someone wakes up, they are likely to get out of bed. Most previous efforts to encode common sense knowledge have made use of relational representations such as frames or predicate logics. However, while such representations have proven expressive enough to describe a wide range of common sense knowledge (see Davis [1] for many examples of how types of common sense knowledge can be formulated in first-order logic, or the Cyc upper level ontology [2]), it has been challenging finding methods of default reasoning that can both make use of such powerful representations and also scale to the number of assertions that are needed to encompass a reasonably broad range of common sense knowledge. In addition, as a knowledge base grows, it is increasingly likely that individual pieces of knowledge will suffer from bugs of various kinds; it seems necessary that we find methods of common sense reasoning that are tolerant to some errors and uncertainties in the knowledge base. However, in recent years there has been much progress in finding ways to reason in uncertain domains using less expressive propositional representations, for example, with Bayesian networks and other types of graphical models. Could such methods be applied to the common sense reasoning problem? Is it possible to take an approach to common sense reasoning that begins not with an ontology of predicates and individuals, but rather with a large set of propositions linked by their conditional or joint probabilities? Propositional representations are less expressive than relational ones, and so it may take a great many propositional rules to express the same constraint as a single relational rule, but such costs in expressivity often come with potential gains in tractability, and in the case of common sense domains, this trade-off seems to be rather poorly understood. The potential benefits of a proposition representation go beyond just matters of efficiency. From the perspective of knowledge acquisition, interfaces for browsing and entering propositional knowledge are potentially much easier to use because they do not require that the user learn to read and write some complex syntax. From the perspective of applying common sense reasoning within applications, propositional representations have such a simple semantics that they are likely quite easy to interface to. 
Thus, while propositional representations may be less expressive and require a larger ontology of propositions than relational representations for the same domain, they are in many ways easier to build, understand and use. In this paper we explore such questions by describing LifeNet, a new common sense knowledge base that captures a first-person model of human experience in terms of a propositional representation. LifeNet represents knowledge as a graphical model relating 80,000 egocentric propositions with 415,000 temporal and atemporal links between these propositions, e.g. • I-put-my-foot-on-the-brake-pedal → I-stop-a-car • I-pour-detergent-into-wash → I-clean-clothes • I-put-quarter-in-washing-machine → I-clean-clothes • I-am-at-a-zoo → I-see-a-monkey • I-put-on-a-seat-belt → I-drive-a-car • I-put-a-key-in-the-ignition → I-drive-a-car We explain how we built LifeNet by extracting its propositions and links from the Open Mind Common Sense corpus of common sense assertions supplied by thousands of members of the general public, present a method for reasoning with the resulting knowledge base, evaluate the knowledge in LifeNet and the quality of inference, and describe a knowledge acquisition system that lets people interact with LifeNet to extend it further. LIFENET LifeNet is a large-scale temporal graphical model expressed in terms of ‘egocentric’ propositions, e.g. propositions of the form: • I-am-at-a-restaurant • I-am-eating-a-sandwich • It-is-3-pm • It-is-raining-outside • I-feel-frightened • I-am-drinking-coffee Each of these propositions is a statement that a person could say was true or not true of their situation, perhaps with some probability. In LifeNet these propositions are arranged into two columns representing the state at two consecutive moments in time, and these propositions are linked by joint probability tables representing both the probability that one proposition follows another, and also the probability of two propositions being true at the same time. A small sample of LifeNet is shown in Figure 1 below:", "title": "" }, { "docid": "eec60b309731ef2f0adbfe94324a2ca0", "text": "Wireless sensor networks are networks composed of a collection of very small devices called nodes. These nodes are equipped with small batteries that are very hard or impossible to replace. Battery power is a must for their sensing, gathering and processing capabilities. Therefore, the battery life of a Wireless Sensor Network should be as long as possible so that the nodes can keep sensing the environment in which they are placed. This paper mainly highlights the concept of hierarchical routing, in which the nodes work hierarchically through the formation of a Cluster Head within each Cluster. These Cluster Heads then transfer the data in the form of packets from one cluster to another. 
In this work, the protocol used for the simulation is Low Energy Adaptive Clustering Hierarchy (LEACH), which is one of the most efficient protocols. The nodes are homogeneous in nature. The simulator used is MATLAB, along with the Cuckoo Search Algorithm. The simulation results demonstrate the effectiveness of the protocol with Cuckoo Search. Keywords— Wireless Sensor Network (WSN), Low Energy Adaptive Clustering Hierarchy (LEACH), Cuckoo Search, Cluster Head (CH), Base Station (BS).", "title": "" }, { "docid": "df92fe7057593a9312de91c06e1525ca", "text": "The Formal Theory of Fun and Creativity (1990–2010) [Schmidhuber, J.: Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEE Trans. Auton. Mental Dev. 2(3), 230–247 (2010b)] describes principles of a curious and creative agent that never stops generating nontrivial and novel and surprising tasks and data. Two modules are needed: a data encoder and a data creator. The former encodes the growing history of sensory data as the agent is interacting with its environment; the latter executes actions shaping the history. Both learn. The encoder continually tries to encode the created data more efficiently, by discovering new regularities in it. Its learning progress is the wow-effect or fun or intrinsic reward of the creator, which maximizes future expected reward, being motivated to invent skills leading to interesting data that the encoder does not yet know but can easily learn with little computational effort. I have argued that this simple formal principle explains science and art and music and humor. Note: This overview heavily draws on previous publications since 1990, especially Schmidhuber (2010b), parts of which are reprinted with friendly permission by IEEE.", "title": "" }, { "docid": "ee20233660c2caa4a24dbfb512172277", "text": "Any projection of a 3D scene into a wide-angle image unavoidably results in distortion. Current projection methods either bend straight lines in the scene, or locally distort the shapes of scene objects. We present a method that minimizes this distortion by adapting the projection to content in the scene, such as salient scene regions and lines, in order to preserve their shape. Our optimization technique computes a spatially-varying projection that respects user-specified constraints while minimizing a set of energy terms that measure wide-angle image distortion. We demonstrate the effectiveness of our approach by showing results on a variety of wide-angle photographs, as well as comparisons to standard projections.", "title": "" }, { "docid": "3e845c9a82ef88c7a1f4447d57e35a3e", "text": "Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. 
First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.", "title": "" }, { "docid": "7c00c5d75ab4beffc595aff99a66b402", "text": "We develop a unified model, known as MgNet, that simultaneously recovers some convolutional neural networks (CNN) for image classification and multigrid (MG) methods for solving discretized partial different equations (PDEs). This model is based on close connections that we have observed and uncovered between the CNN and MG methodologies. For example, pooling operation and feature extraction in CNN correspond directly to restriction operation and iterative smoothers in MG, respectively. As the solution space is often the dual of the data space in PDEs, the analogous concept of feature space and data space (which are dual to each other) is introduced in CNN. With such connections and new concept in the unified model, the function of various convolution operations and pooling used in CNN can be better understood. As a result, modified CNN models (with fewer weights and hyper parameters) are developed that exhibit competitive and sometimes better performance in comparison with existing CNN models when applied to both CIFAR-10 and CIFAR-100 data sets.", "title": "" }, { "docid": "c72940e6154fa31f6bedca17336f8a94", "text": "Following on from ecological theories of perception, such as the one proposed by [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin] this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts; and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations as well as the visual and auditory cues, that we perceive when tasting food.", "title": "" }, { "docid": "e6d3b95f34640c16435b2a7a78bed25b", "text": "In this paper, a novel face dataset with attractiveness ratings, namely the SCUT-FBP dataset, is developed for automatic facial beauty perception. This dataset provides a benchmark to evaluate the performance of different methods for facial attractiveness prediction, including the state-of-the-art deep learning method. The SCUT-FBP dataset contains face portraits of 500 Asian female subjects with attractiveness ratings, all of which have been verified in terms of rating distribution, standard deviation, consistency, and self-consistency. Benchmark evaluations for facial attractiveness prediction were performed with different combinations of facial geometrical features and texture features using classical statistical learning methods and the deep learning method. 
The best Pearson correlation 0.8187 was achieved by the CNN model. The results of the experiments indicate that the SCUT-FBP dataset provides a reliable benchmark for facial beauty perception.", "title": "" }, { "docid": "d22390e43aa4525d810e0de7da075bbf", "text": "information, including knowledge management and e-business applications. Next-generation knowledge management systems will likely rely on conceptual models in the form of ontologies to precisely define the meaning of various symbols. For example, FRODO (a Framework for Distributed Organizational Memories) uses ontologies for knowledge description in organizational memories,1 CoMMA (Corporate Memory Management through Agents) investigates agent technologies for maintaining ontology-based knowledge management systems,2 and Steffen Staab and his colleagues have discussed the methodologies and processes for building ontology-based systems.3 Here we present an integrated enterprise-knowledge management architecture for implementing an ontology-based knowledge management system (OKMS). We focus on two critical issues related to working with ontologies in real-world enterprise applications. First, we realize that imposing a single ontology on the enterprise is difficult if not impossible. Because organizations must devise multiple ontologies and thus require integration mechanisms, we consider means for combining distributed and heterogeneous ontologies using mappings. Additionally, a system’s ontology often must reflect changes in system requirements and focus, so we developed guidelines and an approach for managing the difficult and complex ontology-evolution process.", "title": "" }, { "docid": "223a7496c24dcf121408ac3bba3ad4e5", "text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.", "title": "" }, { "docid": "f012c0d9fe795a738b3cd82cef94ef19", "text": "Fraud detection is an industry where incremental gains in predictive accuracy can have large benefits for banks and customers. Banks adapt models to the novel ways in which “fraudsters” commit credit card fraud. They collect data and engineer new features in order to increase predictive power. This research compares the algorithmic impact on the predictive power across three supervised classification models: logistic regression, gradient boosted trees, and deep learning. This research also explores the benefits of creating features using domain expertise and feature engineering using an autoencoder—an unsupervised feature engineering method. These two methods of feature engineering combined with the direct mapping of the original variables create six different feature sets. Across these feature sets this research compares the aforementioned models. This research concludes that creating features using domain expertise offers a notable improvement in predictive power. 
Additionally, the autoencoder offers a way to reduce the dimensionality of the data and slightly boost predictive power.", "title": "" }, { "docid": "e4ca92179277334d9113a5580be37998", "text": "This paper presents a systematic design approach for low-profile UWB body-of-revolution (BoR) monopole antennas with specified radiation objectives and size constraints. The proposed method combines a random walk scheme, the genetic algorithm, and a BoR moment method analysis for antenna shape optimization. A weighted global cost function, which minimizes the difference between potential optimal points and a utopia point (optimal design combining 3 different objectives) within the criterion space, is adapted. A 24'' wide and 6'' tall aperture was designed operating from low VHF frequencies up to 2 GHz. This optimized antenna shape reaches -15 dBi gain at 41 MHz on a ground plane and is only λ/12 in aperture width and λ/50 in height at this frequency. The same antenna achieves VSWR <; 3 from 210 MHz up to at least 2 GHz. Concurrently, it maintains a realized gain of ~5 dBi with moderate oscillations across the band of interest. A resistive treatment was further applied at the top antenna rim to improve matching and pattern stability. Measurements are provided for validation of the design. Of importance is that the optimized aperture delivers a larger impedance bandwidth as well as more uniform gain and pattern when compared to a previously published inverted-hat antenna of the same size.", "title": "" }, { "docid": "a45be66a54403701a8271c3063dd24d8", "text": "This paper highlights the role of humans in the next generation of driver assistance and intelligent vehicles. Understanding, modeling, and predicting human agents are discussed in three domains where humans and highly automated or self-driving vehicles interact: 1) inside the vehicle cabin, 2) around the vehicle, and 3) inside surrounding vehicles. Efforts within each domain, integrative frameworks across domains, and scientific tools required for future developments are discussed to provide a human-centered perspective on research in intelligent vehicles.", "title": "" }, { "docid": "55989ee3d7130f150113904778720f28", "text": "Because decisions made by human inspectors often involve subjective judgment, in addition to being intensive and therefore costly, an automated approach for printed circuit board (PCB) inspection is preferred to eliminate subjective discrimination and thus provide fast, quantitative, and dimensional assessments. In this study, defect classification is essential to the identification of defect sources. Therefore, an algorithm for PCB defect classification is presented that consists of well-known conventional operations, including image difference, image subtraction, image addition, counted image comparator, flood-fill, and labeling for the classification of six different defects, namely, missing hole, pinhole, underetch, short-circuit, open-circuit, and mousebite. The defect classification algorithm is improved by incorporating proper image registration and thresholding techniques to solve the alignment and uneven illumination problem. The improved PCB defect classification algorithm has been applied to real PCB images to successfully classify all of the defects.", "title": "" }, { "docid": "f3188f260ae3fbe6f89b583aa2557e7f", "text": "We present the design of Note Code -- a music programming puzzle game designed as a tangible device coupled with a Graphical User Interface (GUI). 
Tapping patterns and placing boxes in proximity enables programming these \"note-boxes\" to store sets of notes, play them back and activate different sub-components or neighboring boxes. This system provides users the opportunity to learn a variety of computational concepts, including functions, function calling and recursion, conditionals, as well as engage in composing music. The GUI adds a dimension of viewing the created programs and interacting with a set of puzzles that help discover the various computational concepts in the pursuit of creating target tunes, and optimizing the program made.", "title": "" }, { "docid": "a56d43bd191147170e1df87878ca1b11", "text": "Although problem solving is regarded by most educators as among the most important learning outcomes, few instructional design prescriptions are available for designing problem-solving instruction and engaging learners. This paper distinguishes between well-structured problems and ill-structured problems. Well-structured problems are constrained problems with convergent solutions that engage the application of a limited number of rules and principles within welldefined parameters. Ill-structured problems possess multiple solutions, solution paths, fewer parameters which are less manipulable, and contain uncertainty about which concepts, rules, and principles are necessary for the solution or how they are organized and which solution is best. For both types of problems, this paper presents models for how learners solve them and models for designing instruction to support problem-solving skill development. The model for solving wellstructured problems is based on information processing theories of learning, while the model for solving ill-structured problems relies on an emerging theory of ill-structured problem solving and on constructivist and situated cognition approaches to learning. PROBLEM: INSTRUCTIONAL-DESIGN MODELS FOR PROBLEM SOLVING", "title": "" }, { "docid": "3132a06337d94f032c6dfdb7087633cd", "text": "A Virtual Best Solver (VBS) is a hypothetical algorithm that selects the best solver from a given portfolio of alternatives on a per-instance basis. The VBS idealizes performance when all solvers in a portfolio are run in parallel, and also gives a valuable bound on the performance of portfolio-based algorithm selectors. Typically, VBS performance is measured by running every solver in a portfolio once on a given instance and reporting the best performance over all solvers. Here, we argue that doing so results in a flawed measure that is biased to reporting better performance when a randomized solver is present in an algorithm portfolio. Specifically, this flawed notion of VBS tends to show performance better than that achievable by a perfect selector that for each given instance runs the solver with the best expected running time. We report results from an empirical study using solvers and instances submitted to several SAT competitions, in which we observe significant bias on many random instances and some combinatorial instances. We also show that the bias increases with the number of randomized solvers and decreases as we average solver performance over many independent runs per instance. We propose an alternative VBS performance measure by (1) empirically obtaining the solver with best expected performance for each instance and (2) taking bootstrap samples for this solver on every instance, to obtain a confidence interval on VBS performance. 
Our findings shed new light on widely studied algorithm selection benchmarks and help explain performance gaps observed between VBS and state-of-the-art algorithm selection approaches.", "title": "" } ]
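As a minimal sketch of the bootstrap-based VBS measure described in the passage that closes the list above (not the authors' code; the per-instance runtime layout and parameter values are assumed), the following picks, for each instance, the solver with the best mean runtime and bootstraps its recorded runs to form a confidence interval:

import random
import statistics

def bootstrap_vbs(runtimes, n_boot=1000, alpha=0.05, seed=0):
    """Bootstrap confidence interval for the per-instance best-expected-solver measure.

    runtimes: dict mapping instance -> {solver: [runtimes from repeated runs]} (hypothetical layout).
    Returns (point_estimate, lower, upper) of the mean VBS runtime over instances.
    """
    rng = random.Random(seed)
    # For each instance, keep the runs of the solver with the best *mean* runtime.
    best_runs = {inst: min(per_solver.values(), key=statistics.mean)
                 for inst, per_solver in runtimes.items()}
    totals = []
    for _ in range(n_boot):
        # Resample each chosen solver's runs with replacement and average over instances.
        resampled = [statistics.mean([rng.choice(runs) for _ in runs])
                     for runs in best_runs.values()]
        totals.append(statistics.mean(resampled))
    totals.sort()
    lower = totals[int(n_boot * alpha / 2)]
    upper = totals[int(n_boot * (1 - alpha / 2)) - 1]
    return statistics.mean(totals), lower, upper

Selecting on expected runtime rather than on the single luckiest observed run avoids the optimistic bias that the passage attributes to the usual VBS measure when randomized solvers are in the portfolio.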
scidocsrr
0642923b608cd6d9e2d8f3455cbc443b
Continuous Path Smoothing for Car-Like Robots Using B-Spline Curves
[ { "docid": "38382c04e7dc46f5db7f2383dcae11fb", "text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.", "title": "" } ]
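The query for this record concerns continuous path smoothing for car-like robots with B-spline curves. As a generic illustration of that technique (not the method of the motor-schema passage above; the waypoints and smoothing factor are invented), the sketch below fits a smoothing cubic B-spline through hypothetical waypoints with SciPy and samples positions and headings along the resulting curve:

import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical waypoints from a coarse planner (x, y in meters).
waypoints = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 1.5], [3.0, 1.4], [4.0, 2.8]])

# Fit a cubic smoothing B-spline through the waypoints (s trades fidelity for smoothness).
tck, _ = splprep([waypoints[:, 0], waypoints[:, 1]], k=3, s=0.05)

# Sample a dense, continuously differentiable path and a heading reference for a car-like robot.
u = np.linspace(0.0, 1.0, 200)
x, y = splev(u, tck)
dx, dy = splev(u, tck, der=1)
heading = np.arctan2(dy, dx)

for xi, yi, hi in zip(x[:3], y[:3], heading[:3]):
    print(f"x={xi:.2f}  y={yi:.2f}  heading={hi:.2f} rad")

Because a cubic B-spline is twice continuously differentiable away from repeated knots, curvature, and hence the steering command derived from it, varies smoothly, which is the property such smoothing methods aim for.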
[ { "docid": "a7be4f9177e6790756b7ede4a2d9ca79", "text": "Metabolomics, or the comprehensive profiling of small molecule metabolites in cells, tissues, or whole organisms, has undergone a rapid technological evolution in the past two decades. These advances have led to the application of metabolomics to defining predictive biomarkers for incident cardiometabolic diseases and, increasingly, as a blueprint for understanding those diseases' pathophysiologic mechanisms. Progress in this area and challenges for the future are reviewed here.", "title": "" }, { "docid": "f4bc0b7aa15de139ddb09e406fc1ce0b", "text": "This paper reviews the problem of catastrophic forgetting (the loss or disruption of previously learned information when new information is learned) in neural networks, and explores rehearsal mechanisms (the retraining of some of the previously learned information as the new information is added) as a potential solution. We replicate some of the experiments described by Ratcliff (1990), including those relating to a simple “recency” based rehearsal regime. We then develop further rehearsal regimes which are more effective than recency rehearsal. In particular “sweep rehearsal” is very successful at minimising catastrophic forgetting. One possible limitation of rehearsal in general, however, is that previously learned information may not be available for retraining. We describe a solution to this problem, “pseudorehearsal”, a method which provides the advantages of rehearsal without actually requiring any access to the previously learned information (the original training population) itself. We then suggest an interpretation of these rehearsal mechanisms in the context of a function approximation based account of neural network learning. Both rehearsal and pseudorehearsal may have practical applications, allowing new information to be integrated into an existing network with minimum disruption of old information.", "title": "" }, { "docid": "712636d3a1dfe2650c0568c8f7cf124c", "text": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ’93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn’t change the network architecture or incur any inference overhead. 
The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at https://songhan.github.io/DSD.", "title": "" }, { "docid": "42b9f909251aeb850a1bfcdf7ec3ace4", "text": "Kidney stones are one of the most common chronic disorders in industrialized countries. In patients with kidney stones, the goal of medical therapy is to prevent the formation of new kidney stones and to reduce growth of existing stones. The evaluation of the patient with kidney stones should identify dietary, environmental, and genetic factors that contribute to stone risk. Radiologic studies are required to identify the stone burden at the time of the initial evaluation and to follow up the patient over time to monitor success of the treatment program. For patients with a single stone an abbreviated laboratory evaluation to identify systemic disorders usually is sufficient. For patients with multiple kidney stones 24-hour urine chemistries need to be measured to identify abnormalities that predispose to kidney stones, which guides dietary and pharmacologic therapy to prevent future stone events.", "title": "" }, { "docid": "52315f23e419ba27e6fd058fe8b7aa9d", "text": "Detected obstacles overlaid on the original image Polar map: The agent is at the center of the map, facing 00. The blue points correspond to polar positions of the obstacle points around the agent. 1. Talukder, A., et al. \"Fast and reliable obstacle detection and segmentation for cross-country navigation.\" Intelligent Vehicle SympoTalukder, A., et al. \"Fast and reliable obstacle detection and segmentation for cross-country navigation.\" Intelligent Vehicle Symposium, 2002. IEEE. Vol. 2. IEEE, 2002. 2. Sun, Deqing, Stefan Roth, and Michael J. Black. \"Secrets of optical flow estimation and their principles.\" Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010. 3. Bernini, Nicola, et al. \"Real-time obstacle detection using stereo vision for autonomous ground vehicles: A survey.\" Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on. IEEE, 2014. 4. Broggi, Alberto, et al. \"Stereo obstacle detection in challenging environments: the VIAC experience.\" Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on. IEEE, 2011.", "title": "" }, { "docid": "e56accce9d4ae911e85f5fd2b92a614a", "text": "This paper introduces and documents a novel image database specifically built for the purpose of development and bench-marking of camera-based digital forensic techniques. More than 14,000 images of various indoor and outdoor scenes have been acquired under controlled and thus widely comparable conditions from altogether 73 digital cameras. The cameras were drawn from only 25 different models to ensure that device-specific and model-specific characteristics can be disentangled and studied separately, as validated with results in this paper. In addition, auxiliary images for the estimation of device-specific sensor noise pattern were collected for each camera. Another subset of images to study model-specific JPEG compression algorithms has been compiled for each model. The 'Dresden Image Database' will be made freely available for scientific purposes when this accompanying paper is presented. 
The database is intended to become a useful resource for researchers and forensic investigators. Using a standard database as a benchmark not only makes results more comparable and reproducible, but it is also more economical and avoids potential copyright and privacy issues that go along with self-sampled benchmark sets from public photo communities on the Internet.", "title": "" }, { "docid": "ac0875c0f01d32315f4ea63049d3a1e1", "text": "Point clouds provide a flexible and scalable geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. Hence, the design of intelligent computational models that act directly on point clouds is critical, especially when efficiency considerations or noise preclude the possibility of expensive denoising and meshing procedures. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures. Compared to existing modules operating largely in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked or recurrently applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. Beyond proposing this module, we provide extensive evaluation and analysis revealing that EdgeConv captures and exploits fine-grained geometric properties of point clouds. The proposed approach achieves state-of-the-art performance on standard benchmarks including ModelNet40 and S3DIS. ∗Equal Contribution", "title": "" }, { "docid": "de1f680fd80b20f005dab2ef8067f773", "text": "This paper describes a convolutional neural network based deep learning approach for bird song classification that was used in an audio record-based bird identification challenge, called BirdCLEF 2016. The training and test set contained about 24k and 8.5k recordings, belonging to 999 bird species. The recorded waveforms were very diverse in terms of length and content. We converted the waveforms into frequency domain and splitted into equal segments. The segments were fed into a convolutional neural network for feature learning, which was followed by fully connected layers for classification. In the official scores our solution reached a MAP score of over 40% for main species, and MAP score of over 33% for main species mixed with background species.", "title": "" }, { "docid": "731d9faffc834156d5218a09fbb82e27", "text": "With this paper we take a first step to understand the appropriation of social media by the police. For this purpose we analyzed the Twitter communication by the London Metropolitan Police (MET) and the Greater Manchester Police (GMP) during the riots in August 2011. The systematic comparison of tweets demonstrates that the two forces developed very different practices for using Twitter. 
While MET followed an instrumental approach in their communication, in which the police aimed to remain in a controlled position and keep a distance to the general public, GMP developed an expressive approach, in which the police actively decreased the distance to the citizens. In workshops and interviews, we asked the police officers about their perspectives, which confirmed the identified practices. Our study discusses benefits and risks of the two approaches and the potential impact of social media on the evolution of the role of police in society.", "title": "" }, { "docid": "a2b9c5f2b6299d0de91d80f9316a02e7", "text": "In this paper, with the help of knowledge base, we build and formulate a semantic space to connect the source and target languages, and apply it to the sequence-to-sequence framework to propose a Knowledge-Based Semantic Embedding (KBSE) method. In our KBSE method, the source sentence is firstly mapped into a knowledge based semantic space, and the target sentence is generated using a recurrent neural network with the internal meaning preserved. Experiments are conducted on two translation tasks, the electric business data and movie data, and the results show that our proposed method can achieve outstanding performance, compared with both the traditional SMT methods and the existing encoder-decoder models.", "title": "" }, { "docid": "288f831e93e83b86d28624e31bb2f16c", "text": "Deep learning has made significant improvements at many image processing tasks in recent years, such as image classification, object recognition and object detection. Convolutional neural networks (CNN), which is a popular deep learning architecture designed to process data in multiple array form, show great success to almost all detection & recognition problems and computer vision tasks. However, the number of parameters in a CNN is too high such that the computers require more energy and larger memory size. In order to solve this problem, we propose a novel energy efficient model Binary Weight and Hadamard-transformed Image Network (BWHIN), which is a combination of Binary Weight Network (BWN) and Hadamard-transformed Image Network (HIN). It is observed that energy efficiency is achieved with a slight sacrifice at classification accuracy. Among all energy efficient networks, our novel ensemble model outperforms other energy efficient models.", "title": "" }, { "docid": "ff4c069ab63ced5979cf6718eec30654", "text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. 
By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.", "title": "" }, { "docid": "875e12852dabbcabe24cc59b764a4226", "text": "As more and more marketers incorporate social media as an integral part of the promotional mix, rigorous investigation of the determinants that impact consumers’ engagement in eWOM via social networks is becoming critical. Given the social and communal characteristics of social networking sites (SNSs) such as Facebook, MySpace and Friendster, this study examines how social relationship factors relate to eWOM transmitted via online social websites. Specifically, a conceptual model that identifies tie strength, homophily, trust, normative and informational interpersonal influence as an important antecedent to eWOM behaviour in SNSs was developed and tested. The results confirm that tie strength, trust, normative and informational influence are positively associated with users’ overall eWOM behaviour, whereas a negative relationship was found with regard to homophily. This study suggests that product-focused eWOM in SNSs is a unique phenomenon with important social implications. The implications for researchers, practitioners and policy makers of social media regulation are discussed.", "title": "" }, { "docid": "4e2bed31e5406e30ae59981fa8395d5b", "text": "Asynchronous Learning Networks (ALNs) make the process of collaboration more transparent, because a transcript of conference messages can be used to assess individual roles and contributions and the collaborative process itself. This study considers three aspects of ALNs: the design; the quality of the resulting knowledge construction process; and cohesion, role and power network structures. The design is evaluated according to the Social Interdependence Theory of Cooperative Learning. The quality of the knowledge construction process is evaluated through Content Analysis; and the network structures are analyzed using Social Network Analysis of the response relations among participants during online discussions. In this research we analyze data from two three-monthlong ALN academic university courses: a formal, structured, closed forum and an informal, nonstructured, open forum. We found that in the structured ALN, the knowledge construction process reached a very high phase of critical thinking and developed cohesive cliques. The students took on bridging and triggering roles, while the tutor had relatively little power. In the non-structured ALN, the knowledge construction process reached a low phase of cognitive activity; few cliques were constructed; most of the students took on the passive role of teacher-followers; and the tutor was at the center of activity. These differences are statistically significant. We conclude that a well-designed ALN develops significant, distinct cohesion, and role and power structures lead the knowledge construction process to high phases of critical thinking.", "title": "" }, { "docid": "6f410e93fa7ab9e9c4a7a5710fea88e2", "text": "We propose a fast, scalable locality-sensitive hashing method for the problem of retrieving similar physiological waveform time series. 
When compared to the naive k-nearest neighbor search, the method vastly speeds up the retrieval time of similar physiological waveforms without sacrificing significant accuracy. Our result shows that we can achieve 95% retrieval accuracy or better with up to an order of magnitude of speed-up. The extra time required in advance to create the optimal data structure is recovered when query quantity equals 15% of the repository, while the method incurs a trivial additional memory cost. We demonstrate the effectiveness of this method on an arterial blood pressure time series dataset extracted from the ICU physiological waveform repository of the MIMIC-II database.", "title": "" }, { "docid": "cd0bd7ac3aead17068c7f223fc19da60", "text": "In this letter, a class of wideband impedance transformers based on multisection quarter-wave transmission lines and short-circuited stubs are proposed to be incorporated with good passband frequency selectivity. A synthesis approach is then presented to design this two-port asymmetrical transformer with Chebyshev frequency response. For the specified impedance transformation ratio, bandwidth, and in-band return loss, the required impedance parameters can be directly determined. Next, a transformer with two section transmission lines in the middle is characterized, where a set of design curves are given for practical design. Theoretically, the proposed multisection transformer has attained good passband frequency selectivity against the reported counterparts. Finally, a 50-110 Ω impedance transformer with a fractional bandwidth of 77.8% and 15 dB in-band return loss is designed, fabricated and measured to verify the prediction.", "title": "" }, { "docid": "b1a538752056e91fd5800911f36e6eb0", "text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.", "title": "" }, { "docid": "1e4f13016c846039f7bbed47810b8b3d", "text": "This paper characterizes general properties of useful, or Effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. 
Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.", "title": "" }, { "docid": "7f83aa38f6f715285b757e235da04257", "text": "In recent researches on inverter-based distributed generators, disadvantages of traditional grid-connected current control, such as no grid-forming ability and lack of inertia, have been pointed out. As a result, novel control methods like droop control and virtual synchronous generator (VSG) have been proposed. In both methods, droop characteristics are used to control active and reactive power, and the only difference between them is that VSG has virtual inertia with the emulation of swing equation, whereas droop control has no inertia. In this paper, dynamic characteristics of both control methods are studied, in both stand-alone mode and synchronous-generator-connected mode, to understand the differences caused by swing equation. Small-signal models are built to compare transient responses of frequency during a small loading transition, and state-space models are built to analyze oscillation of output active power. Effects of delays in both controls are also studied, and an inertial droop control method is proposed based on the comparison. The results are verified by simulations and experiments. It is suggested that VSG control and proposed inertial droop control inherits the advantages of droop control, and in addition, provides inertia support for the system.", "title": "" }, { "docid": "5467003778aa2c120c36ac023f0df704", "text": "We consider the task of automated estimation of facial expression intensity. This involves estimation of multiple output variables (facial action units — AUs) that are structurally dependent. Their structure arises from statistically induced co-occurrence patterns of AU intensity levels. Modeling this structure is critical for improving the estimation performance; however, this performance is bounded by the quality of the input features extracted from face images. The goal of this paper is to model these structures and estimate complex feature representations simultaneously by combining conditional random field (CRF) encoded AU dependencies with deep learning. To this end, we propose a novel Copula CNN deep learning approach for modeling multivariate ordinal variables. Our model accounts for ordinal structure in output variables and their non-linear dependencies via copula functions modeled as cliques of a CRF. These are jointly optimized with deep CNN feature encoding layers using a newly introduced balanced batch iterative training algorithm. We demonstrate the effectiveness of our approach on the task of AU intensity estimation on two benchmark datasets. We show that joint learning of the deep features and the target output structure results in significant performance gains compared to existing deep structured models for analysis of facial expressions.", "title": "" } ]
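One passage in the list above contrasts droop control with virtual synchronous generator (VSG) control, the only difference being the swing-equation-based virtual inertia. As a toy per-unit illustration of that difference (the first-order swing model and all parameter values are assumptions for illustration, not taken from the paper), the following compares the frequency response of the two schemes to a step in output power:

# Per-unit toy comparison: droop control vs. a virtual synchronous generator (VSG).
w0 = 1.0          # nominal frequency (p.u.)
m_p = 0.05        # droop coefficient (p.u. frequency drop per p.u. power)
M, D = 8.0, 20.0  # VSG virtual inertia and damping (chosen so steady states match: m_p = 1/D)
P_ref = 0.5
dt, T = 0.01, 5.0

w_vsg = w0
for k in range(int(T / dt)):
    t = k * dt
    P_out = 0.5 if t < 1.0 else 0.7          # load step at t = 1 s
    # Droop: algebraic law, so frequency jumps instantly with the power step (no inertia).
    w_droop = w0 - m_p * (P_out - P_ref)
    # VSG: emulated swing equation, so frequency moves gradually toward the same steady state.
    w_vsg += dt * (P_ref - P_out - D * (w_vsg - w0)) / M
    if k % 100 == 0:
        print(f"t={t:4.1f} s   droop w={w_droop:.4f}   vsg w={w_vsg:.4f}")

Both controllers settle at the same frequency deviation here, but only the VSG shows the gradual, inertia-like transient that the passage attributes to the swing-equation emulation.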
scidocsrr
40cefba0f36830d1fc6ef208bb36b496
Choosing an NLP Library for Analyzing Software Documentation: A Systematic Literature Review and a Series of Experiments
[ { "docid": "3be4b0e2e363b4d64611a6a632070329", "text": "Knowledge management plays a central role in many software development organizations. While much of the important technical knowledge can be captured in documentation, there often exists a gap between the information needs of software developers and the documentation structure. To help developers navigate documentation, we developed a technique for automatically extracting tasks from software documentation by conceptualizing tasks as specific programming actions that have been described in the documentation. More than 70 percent of the tasks we extracted from the documentation of two projects were judged meaningful by at least one of two developers. We present TaskNavigator, a user interface for search queries that suggests tasks extracted with our technique in an auto-complete list along with concepts, code elements, and section headers. We conducted a field study in which six professional developers used TaskNavigator for two weeks as part of their ongoing work. We found search results identified through extracted tasks to be more helpful to developers than those found through concepts, code elements, and section headers. The results indicate that task descriptions can be effectively extracted from software documentation, and that they help bridge the gap between documentation structure and the information needs of software developers.", "title": "" }, { "docid": "38a4f83778adea564e450146060ef037", "text": "The last few years have seen a surge in the number of accurate, fast, publicly available dependency parsers. At the same time, the use of dependency parsing in NLP applications has increased. It can be difficult for a non-expert to select a good “off-the-shelf” parser. We present a comparative analysis of ten leading statistical dependency parsers on a multi-genre corpus of English. For our analysis, we developed a new web-based tool that gives a convenient way of comparing dependency parser outputs. Our analysis will help practitioners choose a parser to optimize their desired speed/accuracy tradeoff, and our tool will help practitioners examine and compare parser output.", "title": "" } ]
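The passages above conceptualize tasks as verb-plus-object programming actions extracted from documentation, and compare off-the-shelf NLP components such as dependency parsers. As a small sketch in that spirit (one possible library choice, not the tooling used in these papers; the model name and the single direct-object pattern are assumptions), the following uses spaCy's dependency parse to pull candidate verb-object tasks out of documentation sentences:

import spacy

# Requires an installed English pipeline, e.g.: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_candidate_tasks(text):
    """Return (verb lemma, object phrase) pairs such as ('parse', 'the configuration file')."""
    tasks = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                for child in token.children:
                    if child.dep_ == "dobj":  # direct object of the verb
                        phrase = " ".join(w.text for w in child.subtree)
                        tasks.append((token.lemma_, phrase))
    return tasks

doc_text = ("Call the parser to parse the configuration file. "
            "Then register a callback to handle incoming requests.")
print(extract_candidate_tasks(doc_text))

Real task-extraction systems add many more syntactic patterns and filters; the point here is only that a dependency parse exposes the verb-object structure the passage relies on.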
[ { "docid": "cf751df3c52306a106fcd00eef28b1a4", "text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.", "title": "" }, { "docid": "2a2f6ebd553f9d788ca952b8b06b7b6d", "text": "The problem of Approximate Maximum Inner Product (AMIP) search has received increasing attention due to its wide applications. Interestingly, based on asymmetric transformation, the problem can be reduced to the Approximate Nearest Neighbor (ANN) search, and hence leverage Locality-Sensitive Hashing (LSH) to find solution. However, existing asymmetric transformations such as L2-ALSH and XBOX, suffer from large distortion error in reducing AMIP search to ANN search, such that the results of AMIP search can be arbitrarily bad. In this paper, we propose a novel Asymmetric LSH scheme based on Homocentric Hypersphere partition (H2-ALSH) for high-dimensional AMIP search. On the one hand, we propose a novel Query Normalized First (QNF) transformation to significantly reduce the distortion error. On the other hand, by adopting the homocentric hypersphere partition strategy, we can not only improve the search efficiency with early stop pruning, but also get higher search accuracy by further reducing the distortion error with limited data range. Our theoretical studies show that H2-ALSH enjoys a guarantee on search accuracy. Experimental results over four real datasets demonstrate that H2-ALSH significantly outperforms the state-of-the-art schemes.", "title": "" }, { "docid": "d2e434f472b60e17ab92290c78706945", "text": "In recent years, a variety of review-based recommender systems have been developed, with the goal of incorporating the valuable information in user-generated textual reviews into the user modeling and recommending process. Advanced text analysis and opinion mining techniques enable the extraction of various types of review elements, such as the discussed topics, the multi-faceted nature of opinions, contextual information, comparative opinions, and reviewers’ emotions. In this article, we provide a comprehensive overview of how the review elements have been exploited to improve standard content-based recommending, collaborative filtering, and preference-based product ranking techniques. The review-based recommender system’s ability to alleviate the well-known rating sparsity and cold-start problems is emphasized. This survey classifies state-of-the-art studies into two principal branches: review-based user profile building and review-based product profile building. In the user profile sub-branch, the reviews are not only used to create term-based profiles, but also to infer or enhance ratings. Multi-faceted opinions can further be exploited to derive the weight/value preferences that users place on particular features. In another sub-branch, the product profile can be enriched with feature opinions or comparative opinions to better reflect its assessment quality. The merit of each branch of work is discussed in terms of both algorithm development and the way in which the proposed algorithms are evaluated. 
In addition, we discuss several future trends based on the survey, which may inspire investigators to pursue additional studies in this area.", "title": "" }, { "docid": "5b25230c26cb4f7687b561e20f3da6f3", "text": "This paper presents an approach to optimal design of elastic flywheels using an Injection Island Genetic Algorithm (iiGA). An iiGA in combination with a finite element code is used to search for shape variations to optimize the Specific Energy Density (SED) of elastic flywheels. SED is defined as the amount of rotational energy stored per unit mass. iiGAs seek solutions simultaneously at different levels of refinement of the problem representation (and correspondingly different definitions of the fitness function) in separate sub-populations (islands). Solutions are sought first at low levels of refinement with an axisymmetric plane stress finite element code for high speed exploration of the coarse design space. Next, individuals are injected into populations with a higher level of resolution that uses an axisymmetric three dimensional finite element model to “ fine-tune” the flywheel designs. In true multi -objective optimization, various “sub-fitness” functions can be defined that represent “good” aspects of the overall fitness function. Solutions can be sought for these various “sub-fitness” functions on different nodes and injected into a node that evaluates the overall fitness. Allowing subpopulations to explore different regions of the fitness space simultaneously allows relatively robust and efficient exploration in problems for which fitness evaluations are costly. 1.0 INTRODUCTION This paper will describe the advantages of searching with an axisymmetric plane stress finite element model (with a “sub-fitness” function) to quickly find building blocks needed to inject into an axisymmetric three-dimensional finite element model through use of an iiGA. An optimal annular composite flywheel shape will be sought by an iiGA and, for comparison, by a “ring” topology parallel GA. The flywheel is modeled as a series of concentric rings (see Figure 1). The thickness of each ring varies linearly in the radial direction with the possibilit y for a diverse set of material choices for each ring. Figure 2 shows a typical flywheel model in which symmetry is used to increase computational eff iciency. The overall fitness function for the genetic algorithm GALOPPS was the specific energy density (SED) of a flywheel, which is defined as: SED I mass = 1 2 2 ω 1.) where ω is the angular velocity of the flywheel (“sub-fitness” function), I is the mass moment of inertia defined by:", "title": "" }, { "docid": "56b58efbeab10fa95e0f16ad5924b9e5", "text": "This paper investigates (i) preplanned switching events and (ii) fault events that lead to islanding of a distribution subsystem and formation of a micro-grid. The micro-grid includes two distributed generation (DG) units. One unit is a conventional rotating synchronous machine and the other is interfaced through a power electronic converter. The interface converter of the latter unit is equipped with independent real and reactive power control to minimize islanding transients and maintain both angle stability and voltage quality within the micro-grid. The studies are performed based on a digital computer simulation approach using the PSCAD/EMTDC software package. 
The studies show that an appropriate control strategy for the power electronically interfaced DG unit can ensure stability of the micro-grid and maintain voltage quality at designated buses, even during islanding transients. This paper concludes that presence of an electronically-interfaced DG unit makes the concept of micro-grid a technically viable option for further investigations.", "title": "" }, { "docid": "4d231af03ac60ccb1a7c17a5defe693a", "text": "This paper describes a hierarchical neural network we propose for sentence classification to extract product information from product documents. The network classifies each sentence in a document into attribute and condition classes on the basis of word sequences and sentence sequences in the document. Experimental results showed the method using the proposed network significantly outperformed baseline methods by taking semantic representation of word and sentence sequential data into account. We also evaluated the network with two different product domains (insurance and tourism domains) and found that it was effective for both the domains.", "title": "" }, { "docid": "093465aba11b82b768e4213b23c5911b", "text": "This paper describes the generation of large deformation diffeomorphisms phi:Omega=[0,1]3<-->Omega for landmark matching generated as solutions to the transport equation dphi(x,t)/dt=nu(phi(x,t),t),epsilon[0,1] and phi(x,0)=x, with the image map defined as phi(.,1) and therefore controlled via the velocity field nu(.,t),epsilon[0,1]. Imagery are assumed characterized via sets of landmarks {xn, yn, n=1, 2, ..., N}. The optimal diffeomorphic match is constructed to minimize a running smoothness cost parallelLnu parallel2 associated with a linear differential operator L on the velocity field generating the diffeomorphism while simultaneously minimizing the matching end point condition of the landmarks. Both inexact and exact landmark matching is studied here. Given noisy landmarks xn matched to yn measured with error covariances Sigman, then the matching problem is solved generating the optimal diffeomorphism phi;(x,1)=integral0(1)nu(phi(x,t),t)dt+x where nu(.)=argmin(nu.)integral1(0) integralOmega parallelLnu(x,t) parallel2dxdt +Sigman=1N[yn-phi(xn,1)] TSigman(-1)[yn-phi(xn,1)]. Conditions for the existence of solutions in the space of diffeomorphisms are established, with a gradient algorithm provided for generating the optimal flow solving the minimum problem. Results on matching two-dimensional (2-D) and three-dimensional (3-D) imagery are presented in the macaque monkey.", "title": "" }, { "docid": "2e8333674a0b9c782aa3796b6475bdf7", "text": "As embedded systems are more than ever present in our society, their security is becoming an increasingly important issue. However, based on the results of many recent analyses of individual firmware images, embedded systems acquired a reputation of being insecure. Despite these facts, we still lack a global understanding of embedded systems’ security as well as the tools and techniques needed to support such general claims. In this paper we present the first public, large-scale analysis of firmware images. In particular, we unpacked 32 thousand firmware images into 1.7 million individual files, which we then statically analyzed. We leverage this large-scale analysis to bring new insights on the security of embedded devices and to underline and detail several important challenges that need to be addressed in future research. 
We also show the main benefits of looking at many different devices at the same time and of linking our results with other large-scale datasets such as the ZMap’s HTTPS survey. In summary, without performing sophisticated static analysis, we discovered a total of 38 previously unknown vulnerabilities in over 693 firmware images. Moreover, by correlating similar files inside apparently unrelated firmware images, we were able to extend some of those vulnerabilities to over 123 different products. We also confirmed that some of these vulnerabilities altogether are affecting at least 140K devices accessible over the Internet. It would not have been possible to achieve these results without an analysis at such wide scale. We believe that this project, which we plan to provide as a firmware unpacking and analysis web service, will help shed some light on the security of embedded devices. http://firmware.re", "title": "" }, { "docid": "c470e4b10e452bc39e271a195303359b", "text": "This paper presents KeypointNet, an end-to-end geometric reasoning framework to learn an optimal set of category-specific 3D keypoints, along with their detectors. Given a single image, KeypointNet extracts 3D keypoints that are optimized for a downstream task. We demonstrate this framework on 3D pose estimation by proposing a differentiable objective that seeks the optimal set of keypoints for recovering the relative pose between two views of an object. Our model discovers geometrically and semantically consistent keypoints across viewing angles and instances of an object category. Importantly, we find that our end-to-end framework using no ground-truth keypoint annotations outperforms a fully supervised baseline using the same neural network architecture on the task of pose estimation. The discovered 3D keypoints on the car, chair, and plane categories of ShapeNet [6] are visualized at keypointnet.github.io.", "title": "" }, { "docid": "d6039a3f998b33c08b07696dfb1c2ca9", "text": "In this paper, we propose a platform surveillance monitoring system using image processing technology for passenger safety in railway station. The proposed system monitors almost entire length of the track line in the platform by using multiple cameras, and determines in real-time whether a human or dangerous obstacle is in the preset monitoring area by using image processing technology. According to the experimental results, we verity system performance in real condition. Detection of train state and object is conducted robustly by using proposed image processing algorithm. Moreover, to deal with the accident immediately, the system provides local station, central control room and train with the video information and alarm message.", "title": "" }, { "docid": "563abf001fd70dd0027d333f01c5b36c", "text": "We have now confirmed the existence of > 1800 planets orbiting stars other than the Sun; known as extrasolar planets or exoplanets. The different methods for detecting such planets are sensitive to different regions of parameter space, and so, we are discovering a wide diversity of exoplanets and exoplanetary systems. Characterizing such planets is difficult, but we are starting to be able to determine something of their internal composition and are beginning to be able to probe their atmospheres, the first step towards the detection of bio-signatures and, hence, determining if a planet could be habitable or not. 
Here, I will review how we detect exoplanets, how we characterize exoplanetary systems and the exoplanets themselves, where we stand with respect to potentially habitable planets and how we are progressing towards being able to actually determine if a planet could host life or not.", "title": "" }, { "docid": "4b408cc1c15e6099c16fe0a94923f86e", "text": "Speaker diarization is the task of determining “who spoke when?” in an audio or video recording that contains an unknown amount of speech and also an unknown number of speakers. Initially, it was proposed as a research topic related to automatic speech recognition, where speaker diarization serves as an upstream processing step. Over recent years, however, speaker diarization has become an important key technology for many tasks, such as navigation, retrieval, or higher level inference on audio data. Accordingly, many important improvements in accuracy and robustness have been reported in journals and conferences in the area. The application domains, from broadcast news, to lectures and meetings, vary greatly and pose different problems, such as having access to multiple microphones and multimodal information or overlapping speech. The most recent review of existing technology dates back to 2006 and focuses on the broadcast news domain. In this paper, we review the current state-of-the-art, focusing on research developed since 2006 that relates predominantly to speaker diarization for conference meetings. Finally, we present an analysis of speaker diarization performance as reported through the NIST Rich Transcription evaluations on meeting data and identify important areas for future research.", "title": "" }, { "docid": "63cfa266a73cfbec205ebb189614a8f9", "text": "Big data analytics (BDA) has emerged as an important area of study for both academics and practitioners. Despite of rising potential value of BDA, a few studies have been conducted to investigate the effect of BDA on firm performance. In this research in progress, according to the challenges of BDA dimensions (volume, variety, velocity, veracity and value) we propose the BDA capability dimensions in line with IT capability concept. BDA infrastructure capability, BDA management capability, BDA personnel capability and relational BDA capability provide the overall BDA Capability concept. The study, by employing dynamic capability, proposes that BDA capability impacts on firm financial and market performance by mediated effect of operational performance. The finding of this research by providing essential BDA capability and its effect on firm performance can apply as a roadmap and fill the gap between managers’ expectation of BDA and what is emerged of BDA implementation.", "title": "" }, { "docid": "a8d5ab31f28ef184c0087ea10524435c", "text": "With the increasing popularization of radio frequency identification RFID technology in the retail and logistics industry, RFID privacy concern has attracted much attention, because a tag responds to queries from readers no matter they are authorized or not. An effective solution is to use a commercially available blocker tag that behaves as if a set of tags with known blocking IDs are present. However, the use of blocker tags makes the classical RFID estimation problem much more challenging, as some genuine tag IDs are covered by the blocker tag and some are not. In this paper, we propose RFID estimation scheme with blocker tags REB, the first RFID estimation scheme with the presence of blocker tags. 
REB uses the framed slotted Aloha protocol specified in the EPC C1G2 standard. For each round of the Aloha protocol, REB first executes the protocol on the genuine tags and the blocker tag, and then virtually executes the protocol on the known blocking IDs using the same Aloha protocol parameters. REB conducts statistical inference from the two sets of responses and estimates the number of genuine tags. Rigorous theoretical analysis of parameter settings is proposed to guarantee the required estimation accuracy, meanwhile minimizing the time cost and energy cost of REB. We also reveal a fundamental tradeoff between the time cost and energy cost of REB, which can be flexibly adjusted by the users according to the practical requirements. Extensive experimental results reveal that REB significantly outperforms the state-of-the-art identification protocols in terms of both time efficiency and energy efficiency.", "title": "" }, { "docid": "a9abef2213a7a24ec87aef11888d7854", "text": "Mechanical ventilation (MV) remains the cornerstone of acute respiratory distress syndrome (ARDS) management. It guarantees sufficient alveolar ventilation, high FiO2 concentration, and high positive end-expiratory pressure levels. However, experimental and clinical studies have accumulated, demonstrating that MV also contributes to the high mortality observed in patients with ARDS by creating ventilator-induced lung injury. Under these circumstances, extracorporeal lung support (ECLS) may be beneficial in two distinct clinical settings: to rescue patients from the high risk for death associated with severe hypoxemia, hypercapnia, or both not responding to maximized conventional MV, and to replace MV and minimize/abolish the harmful effects of ventilator-induced lung injury. High extracorporeal blood flow venovenous extracorporeal membrane oxygenation (ECMO) may therefore rescue the sickest patients with ARDS from the high risk for death associated with severe hypoxemia, hypercapnia, or both not responding to maximized conventional MV. Successful venovenous ECMO treatment in patients with extremely severe H1N1-associated ARDS and positive results of the CESAR trial have led to an exponential use of the technology in recent years. Alternatively, lower-flow extracorporeal CO2 removal devices may be used to reduce the intensity of MV (by reducing Vt from 6 to 3-4 ml/kg) and to minimize or even abolish the harmful effects of ventilator-induced lung injury if used as an alternative to conventional MV in nonintubated, nonsedated, and spontaneously breathing patients. Although conceptually very attractive, the use of ECLS in patients with ARDS remains controversial, and high-quality research is needed to further advance our knowledge in the field.", "title": "" }, { "docid": "cae4703a50910c7718284c6f8230a4bc", "text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. Despite this fact, human experts can reliably fly helicopters through a wide range of maneuvers, including aerobatic maneuvers at the edge of the helicopter’s capabilities. We present apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks being demonstrated by an expert. These apprenticeship learning algorithms have enabled us to significantly extend the state of the art in autonomous helicopter aerobatics. 
Our experimental results include the first autonomous execution of a wide range of maneuvers, including but not limited to in-place flips, in-place rolls, loops and hurricanes, and even auto-rotation landings, chaos and tic-tocs, which only exceptional human pilots can perform. Our results also include complete airshows, which require autonomous transitions between many of these maneuvers. Our controllers perform as well as, and often even better than, our expert pilot.", "title": "" }, { "docid": "103b9c101d18867f25d605bd0b314c51", "text": "The human visual system is the most complex pattern recognition device known. In ways that are yet to be fully understood, the visual cortex arrives at a simple and unambiguous interpretation of data from the retinal image that is useful for the decisions and actions of everyday life. Recent advances in Bayesian models of computer vision and in the measurement and modeling of natural image statistics are providing the tools to test and constrain theories of human object perception. In turn, these theories are having an impact on the interpretation of cortical function.", "title": "" }, { "docid": "6a91c45e0cfac9dd472f68aec15889eb", "text": "UNLABELLED\nThe Insight Toolkit offers plenty of features for multidimensional image analysis. Current implementations, however, often suffer either from a lack of flexibility due to hard-coded C++ pipelines for a certain task or by slow execution times, e.g. caused by inefficient implementations or multiple read/write operations for separate filter execution. We present an XML-based wrapper application for the Insight Toolkit that combines the performance of a pure C++ implementation with an easy-to-use graphical setup of dynamic image analysis pipelines. Created XML pipelines can be interpreted and executed by XPIWIT in console mode either locally or on large clusters. We successfully applied the software tool for the automated analysis of terabyte-scale, time-resolved 3D image data of zebrafish embryos.\n\n\nAVAILABILITY AND IMPLEMENTATION\nXPIWIT is implemented in C++ using the Insight Toolkit and the Qt SDK. It has been successfully compiled and tested under Windows and Unix-based systems. Software and documentation are distributed under Apache 2.0 license and are publicly available for download at https://bitbucket.org/jstegmaier/xpiwit/downloads/.\n\n\nCONTACT\njohannes.stegmaier@kit.edu\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "86d705256c19f63dac90162b33818a9b", "text": "Despite the recent success of deep-learning based semantic segmentation, deploying a pre-trained road scene segmenter to a city whose images are not presented in the training set would not achieve satisfactory performance due to dataset biases. Instead of collecting a large number of annotated images of each city of interest to train or refine the segmenter, we propose an unsupervised learning approach to adapt road scene segmenters across different cities. By utilizing Google Street View and its timemachine feature, we can collect unannotated images for each road scene at different times, so that the associated static-object priors can be extracted accordingly. By advancing a joint global and class-specific domain adversarial learning framework, adaptation of pre-trained segmenters to that city can be achieved without the need of any user annotation or interaction. 
We show that our method improves the performance of semantic segmentation in multiple cities across continents, while it performs favorably against state-of-the-art approaches requiring annotated training data.", "title": "" }, { "docid": "885b7e9fb662d938fc8264597fa070b8", "text": "Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words – natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.", "title": "" } ]
scidocsrr
a68f2ec074f2a3d07284359f4fd74643
Active contours with selective local or global segmentation: A new formulation and level set method
[ { "docid": "b348a2835a16ac271f2140f9057dcaa1", "text": "The variational method has been introduced by Kass et al. (1987) in the field of object contour modeling, as an alternative to the more traditional edge detection-edge thinning-edge sorting sequence. since the method is based on a pre-processing of the image to yield an edge map, it shares the limitations of the edge detectors it uses. in this paper, we propose a modified variational scheme for contour modeling, which uses no edge detection step, but local computations instead—only around contour neighborhoods—as well as an “anticipating” strategy that enhances the modeling activity of deformable contour curves. many of the concepts used were originally introduced to study the local structure of discontinuity, in a theoretical and formal statement by leclerc & zucker (1987), but never in a practical situation such as this one. the first part of the paper introduces a region-based energy criterion for active contours, and gives an examination of its implications, as compared to the gradient edge map energy of snakes. then, a simplified optimization scheme is presented, accounting for internal and external energy in separate steps. this leads to a complete treatment, which is described in the last sections of the paper (4 and 5). the optimization technique used here is mostly heuristic, and is thus presented without a formal proof, but is believed to fill a gap between snakes and other useful image representations, such as split-and-merge regions or mixed line-labels image fields.", "title": "" }, { "docid": "c73623dd471b82bb8ab1308d31b14713", "text": "It's coming again, the new collection that this site has. To complete your curiosity, we offer the favorite mathematical problems in image processing partial differential equations and the calculus of variations book as the choice today. This is a book that will show you even new to old thing. Forget it; it will be right for you. Well, when you are really dying of mathematical problems in image processing partial differential equations and the calculus of variations, just pick it. You know, this book is always making the fans to be dizzy if not to find.", "title": "" } ]
[ { "docid": "4ea81c5e995d074998ba34a820c3de1c", "text": "We address the delicate problem of offsetting polygonal meshes. Offsetting is important for stereolithography, NC machining, rounding corners, collision avoidance, and Hausdorff error calculation. We introduce a new fast, and very simple method for offsetting (growing and shrinking) a solid model by arbitrary distance r. Our approach is based on a hybrid data structure combining point samples, voxels, and continuous surfaces. Each face, edge, and vertex of the original solid generate a set of offset points spaced along the (pencil of) normals associated with it. The offset points and normals are sufficiently dense to ensure that all voxels between the original and the offset surfaces are properly labeled as either too close to the original solid or possibly containing the offset surface. Then the offset boundary is generated as the isosurface using these voxels and the associated offset points. We provide a tight error bound on the resulting surface and report experimental results on a variety of CAD models.", "title": "" }, { "docid": "23676a52e1ed03d7b5c751a9986a7206", "text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of", "title": "" }, { "docid": "0fbd2e65c5d818736486ffb1ec5e2a6d", "text": "We establish linear profile decompositions for the fourth order linear Schrödinger equation and for certain fourth order perturbations of the linear Schrödinger equation, in dimensions greater than or equal to two. We apply these results to prove dichotomy results on the existence of extremizers for the associated Stein–Tomas/Strichartz inequalities; along the way, we also obtain lower bounds for the norms of these operators.", "title": "" }, { "docid": "8a9cf6b4d7d6d2be1d407ef41ceb23e5", "text": "A highly discriminative and computationally efficient descriptor is needed in many computer vision applications involving human action recognition. 
This paper proposes a hand-crafted skeleton-based descriptor for human action recognition. It is constructed from five fixed size covariance matrices calculated using strongly related joints coordinates over five body parts (spine, left/ right arms, and left/ right legs). Since covariance matrices are symmetric, the lower/ upper triangular parts of these matrices are concatenated to generate an efficient descriptor. It achieves a saving from 78.26 % to 80.35 % in storage space and from 75 % to 90 % in processing time (depending on the dataset) relative to techniques adopting a covariance descriptor based on all the skeleton joints. To show the effectiveness of the proposed method, its performance is evaluated on five public datasets: MSR-Action3D, MSRC-12 Kinect Gesture, UTKinect-Action, Florence3D-Action, and NTU RGB+D. The obtained recognition rates on all datasets outperform many existing methods and compete with the current state of the art techniques.", "title": "" }, { "docid": "77cde25333d33a8c4b13de914c17effb", "text": "Machine Learning has been the quintessential solution for many AI problems, but learning models are heavily dependent on specific training data. Some learning models can be incorporated with prior knowledge using a Bayesian setup, but these learning models do not have the ability to access any organized world knowledge on demand. In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs depending on the task using attention mechanism. We introduce a convolutionbased model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method is highly scalable to the amount of prior information that has to be processed and can be applied to any generic NLP task. Using this method we show significant improvement in performance for text classification with 20Newsgroups (News20) & DBPedia datasets, and natural language inference with Stanford Natural Language Inference (SNLI) dataset. We also demonstrate that a deep learning model can be trained with substantially less amount of labeled training data, when it has access to organized world knowledge in the form of a knowledge base.", "title": "" }, { "docid": "97ba22fa685384e9dfd0402798fe7019", "text": "We consider the problems of i) using public-key encryption to enforce dynamic access control on clouds; and ii) key rotation of data stored on clouds. Historically, proxy re-encryption, ciphertext delegation, and related technologies have been advocated as tools that allow for revocation and the ability to cryptographically enforce dynamic access control on the cloud, and more recently they have suggested for key rotation of data stored on clouds. Current literature frequently assumes that data is encrypted directly with public-key encryption primitives. However, for efficiency reasons systems would need to deploy with hybrid encryption. Unfortunately, we show that if hybrid encryption is used, then schemes are susceptible to a key-scraping attack. Given a proxy re-encryption or delegation primitive, we show how to construct a new hybrid scheme that is resistant to this attack and highly efficient. The scheme only requires the modification of a small fraction of the bits of the original ciphertext. 
The number of modifications scales linearly with the security parameter and logarithmically with the file length: it does not require the entire symmetric-key ciphertext to be re-encrypted! Beyond the construction, we introduce new security definitions for the problem at hand, prove our construction secure, discuss use cases, and provide quantitative data showing its practical benefits and efficiency. We show the construction extends to identity-based proxy re-encryption and revocable-storage attribute-based encryption, and thus that the construction is robust, supporting most primitives of interest.", "title": "" }, { "docid": "54de8d4b8a06572dc30d1b8699eb38e5", "text": "OBJECTIVE\nSpecialised pressure-relieving supports reduce or relieve the interface pressure between the skin and the support surface. The comparative effectiveness of dynamic support surfaces is debated. The aim of this study is to examine the impact of using an alternating pressure air mattress (APAM) on pressure ulcer (PU) incidence in patients receiving home-based care. A second aim was to determine the level of patient/family satisfaction with comfort and gain the views of the care team that used the APAM.\n\n\nMETHOD\nThe PARESTRY study was a prospective observational study conducted in patients with a high risk of PUs (Braden score <15), discharged to hospital-care at home. The primary prevention groups consisted of patients with no PU at baseline who were in bed for at least 20 hours a day. Patients at baseline with a category 3 or 4 PU or a category 1 or 2 PU in association with poor general health or end-of-life status were included in the secondary prevention group. All patients were laid on an APAM. The primary end point was the % of patients with a worsening skin condition in the pressure area (heel, sacrum, ischium) at day 90 or at the end of the study. The primary analysis was done on the full analysis set (patients included with at least a second assessment), using the last observation carried forward technique to handle missing data, at day 90. A 95% confidence interval was calculated.\n\n\nRESULTS\nAnalysis was performed on 92 patients (30 in primary prevention and 62 in secondary prevention). The average time spent in bed was 22.7 (SD 2.7) hours a day and 22.6 (SD 2.2) hours in the primary and secondary prevention groups, respectively. At baseline, in the secondary group, 77% of patients had a sacral PU, 63% a heel PU, 8% an ischial tuberosity PU and 45% a PU in another area, a number of patients having multiple PUs. In the primary prevention group, 63% (19/30) of patients dropped out of the study (5 were hospitalised, 9 died, 5 other causes). In the secondary prevention group, 61% (38/62) dropped out (7 were hospitalised, 23 died, 8 others causes). In the primary prevention group, only one patient had worsening skin condition. In the secondary prevention group, 17.7% (11/62: 95% CI: 8.3-27.2) of patients had worsening skin condition. The number of PUs decreased regardless of location. At the end of follow-up, 49% (45/92) of patients had a PU versus 67% (62/92) at baseline\n\n\nCONCLUSION\nThis work provides data on the incidence of PUs in patients at high risk, who are using APAMs, and, following inpatient hospitalisation, are taken into home health-care centres. 
The results of the study highlight the importance of continuity of care across transitions between care settings.", "title": "" }, { "docid": "97d7281f14c9d9e745fe6f63044a7d91", "text": "The Long Term Evolution (LTE) is the latest mobile standard being implemented globally to provide connectivity and access to advanced services for personal mobile devices. Moreover, LTE networks are considered to be one of the main pillars for the deployment of Machine to Machine (M2M) communication systems and the spread of the Internet of Things (IoT). As an enabler for advanced communications services with a subscription count in the billions, security is of capital importance in LTE. Although legacy GSM (Global System for Mobile Communications) networks are known for being insecure and vulnerable to rogue base stations, LTE is assumed to guarantee confidentiality and strong authentication. However, LTE networks are vulnerable to security threats that tamper availability, privacy and authentication. This manuscript, which summarizes and expands the results presented by the author at ShmooCon 2016 [1], investigates the insecurity rationale behind LTE protocol exploits and LTE rogue base stations based on the analysis of real LTE radio link captures from the production network. Implementation results are discussed from the actual deployment of LTE rogue base stations, IMSI catchers and exploits that can potentially block a mobile device. A previously unknown technique to potentially track the location of mobile devices as they move from cell to cell is also discussed, with mitigations being proposed.", "title": "" }, { "docid": "5f40ac6afd39e3d2fcbc5341bc3af7b4", "text": "We present a modified quasi-Yagi antenna for use in WLAN access points. The antenna uses a new microstrip-to-coplanar strip (CPS) transition, consisting of a tapered microstrip input, T-junction, conventional 50-ohm microstrip line, and three artificial transmission line (ATL) sections. The design concept, mode conversion scheme, and simulated and experimental S-parameters of the transition are discussed first. It features a compact size, and a 3dB-insertion loss bandwidth of 78.6%. Based on the transition, a modified quasi-Yagi antenna is demonstrated. In addition to the new transition, the antenna consists of a CPS feed line, a meandered dipole, and a parasitic element. The meandered dipole can substantially increase to the front-to-back ratio of the antenna without sacrificing the operating bandwidth. The parasitic element is placed in close proximity to the driven element to improve impedance bandwidth and radiation characteristics. The antenna exhibits excellent end-fire radiation with a front-to-back ratio of greater than 15 dB. It features a moderate gain around 4 dBi, and a fractional bandwidth of 38.3%. We carefully investigate the concept, methodology, and experimental results of the proposed antenna.", "title": "" }, { "docid": "7f8ee14d2d185798c3864178bd450f3d", "text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. 
To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.", "title": "" }, { "docid": "14fcb5c784de5fcb6950212f5b3eabb4", "text": "This paper presents a pure textile, capacitive pressure sensor designed for integration into clothing to measure pressure on human body. The applications fields cover all domains where a soft and bendable sensor with a high local resolution is needed, e.g. in rehabilitation, pressure-sore prevention or motion detection due to muscle activities. We developed several textile sensors with spatial resolution of 2 times 2 cm and an average error below 4 percent within the measurement range 0 to 10 N/cm2. Applied on the upper arm the textile pressure sensor determines the deflection of the forearm between 0 and 135 degrees due to the muscle bending.", "title": "" }, { "docid": "7e152f2fcd452e67f52b4a5165950f2d", "text": "This paper describes a framework that allows fine-grained and flexible access control to connected devices with very limited processing power and memory. We propose a set of security and performance requirements for this setting and derive an authorization framework distributing processing costs between constrained devices and less constrained back-end servers while keeping message exchanges with the constrained devices at a minimum. As a proof of concept we present performance results from a prototype implementing the device part of the framework.", "title": "" }, { "docid": "631cd44345606641454e9353e071f2c5", "text": "Microblogs are rich sources of information because they provide platforms for users to share their thoughts, news, information, activities, and so on. Twitter is one of the most popular microblogs. Twitter users often use hashtags to mark specific topics and to link them with related tweets. In this study, we investigate the relationship between the music listening behaviors of Twitter users and a popular music ranking service by comparing information extracted from tweets with music-related hashtags and the Billboard chart. We collect users' music listening behavior from Twitter using music-related hashtags (e.g., #nowplaying). We then build a predictive model to forecast the Billboard rankings and hit music. The results show that the numbers of daily tweets about a specific song and artist can be effectively used to predict Billboard rankings and hits. 
This research suggests that users' music listening behavior on Twitter is highly correlated with general music trends and could play an important role in understanding consumers' music consumption patterns. In addition, we believe that Twitter users' music listening behavior can be applied in the field of Music Information Retrieval (MIR).", "title": "" }, { "docid": "c974abaccde124be43d8fa9779f139dc", "text": "An articulated trajectory is defined as a trajectory that remains at a fixed distance with respect to a parent trajectory. In this paper, we present a method to reconstruct an articulated trajectory in three dimensions given the two dimensional projection of the articulated trajectory, the 3D parent trajectory, and the camera pose at each time instant. This is a core challenge in reconstructing the 3D motion of articulated structures such as the human body because endpoints of each limb form articulated trajectories. We simultaneously apply activity-independent spatial and temporal constraints, in the form of fixed 3D distance to the parent trajectory and smooth 3D motion. There exist two solutions that satisfy each instantaneous 2D projection and articulation constraint (a ray intersects a sphere at up to two locations) and we show that resolving this ambiguity by enforcing smoothness is equivalent to solving a binary quadratic programming problem. A geometric analysis of the reconstruction of articulated trajectories is also presented and a measure of the reconstructibility of an articulated trajectory is proposed.", "title": "" }, { "docid": "c4043bfa8cfd74f991ac13ce1edd5bf5", "text": "Citations between scientific papers and related bibliometric indices, such as the h-index for authors and the impact factor for journals, are being increasingly used – often in controversial ways – as quantitative tools for research evaluation. Yet, a fundamental research question remains still open: to which extent do quantitative metrics capture the significance of scientific works? We analyze the network of citations among the 449, 935 papers published by the American Physical Society (APS) journals between 1893 and 2009, and focus on the comparison of metrics built on the citation count with network-based metrics. We contrast five article-level metrics with respect to the rankings that they assign to a set of fundamental papers, called Milestone Letters, carefully selected by the APS editors for “making long-lived contributions to physics, either by announcing significant discoveries, or by initiating new areas of research”. A new metric, which combines PageRank centrality with the explicit requirement that paper score is not biased by paper age, is the best-performing metric overall in identifying the Milestone Letters. The lack of time bias in the new metric makes it also possible to use it to compare papers of different age on the same scale. We find that networkbased metrics identify the Milestone Letters better than metrics based on the citation count, which suggests that the structure of the citation network contains information that can be used to improve the ranking of scientific publications. The methods and results presented here are relevant for all evolving systems where network centrality metrics are applied, for example the World Wide Web and online social networks.", "title": "" }, { "docid": "5ce09c28a3377046651fe84b7724caf5", "text": "We propose a simple protocol for authentication using only a password. 
The result of the protocol is a cryptographically strong shared secret for securing other data - e.g. network communication. SAE is resistant to passive attack, active attack, and dictionary attack. It provides a secure alternative to using certificates or when a centralized authority is not available. It is a peer-to-peer protocol, has no asymmetry, and supports simultaneous initiation. It is therefore well-suited for use in mesh networks. It supports the ability to tradeoff speed for strength of the resulting shared key. SAE has been implemented for 802.11-based mesh networks and can easily be adapted to other wireless mesh technology.", "title": "" }, { "docid": "97a817932c3fc43906cfd451ac8964da", "text": "Data science and machine learning are the key technologies when it comes to the processes and products with automatic learning and optimization to be used in the automotive industry of the future. This article defines the terms “data science” (also referred to as “data analytics”) and “machine learning” and how they are related. In addition, it defines the term “optimizing analytics“ and illustrates the role of automatic optimization as a key technology in combination with data analytics. It also uses examples to explain the way that these technologies are currently being used in the automotive industry on the basis of the major subprocesses in the automotive value chain (development, procurement; logistics, production, marketing, sales and after-sales, connected customer). Since the industry is just starting to explore the broad range of potential uses for these technologies, visionary application examples are used to illustrate the revolutionary possibilities that they offer. Finally, the article demonstrates how these technologies can make the automotive industry more efficient and enhance its customer focus throughout all its operations and activities, extending from the product and its development process to the customers and their connection to the product.", "title": "" }, { "docid": "5ea5650e03be82a600159c2095c387b6", "text": "The medicinal plants are widely used by the traditional medicinal practitioners for curing various diseases in their day to day practice. In traditional system of medicine, different parts (leaves, stem, flower, root, seeds and even whole plant) of Ocimum sanctum Linn. have been recommended for the treatment of bronchitis, malaria, diarrhea, dysentery, skin disease, arthritis, eye diseases, insect bites and so on. The O. sanctum L. has also been suggested to possess anti-fertility, anticancer, antidiabetic, antifungal, antimicrobial, cardioprotective, analgesic, antispasmodic and adaptogenic actions. Eugenol (1-hydroxy-2-methoxy-4-allylbenzene), the active constituents present in O. sanctum L. have been found to be largely responsible for the therapeutic potentials. The pharmacological studies reported in the present review confirm the therapeutic value of O. sanctum L. The results of the above studies support the use of this plant for human and animal disease therapy and reinforce the importance of the ethno-botanical approach as a potential source of bioactive substances.", "title": "" }, { "docid": "4be71eccf611b7bdffb708f8cfa2613d", "text": "Many natural and social systems develop complex networks that are usually modeled as random graphs. The eigenvalue spectrum of these graphs provides information about their structural properties. 
While the semicircle law is known to describe the spectral densities of uncorrelated random graphs, much less is known about the spectra of real-world graphs, describing such complex systems as the Internet, metabolic pathways, networks of power stations, scientific collaborations, or movie actors, which are inherently correlated and usually very sparse. An important limitation in addressing the spectra of these systems is that the numerical determination of the spectra for systems with more than a few thousand nodes is prohibitively time and memory consuming. Making use of recent advances in algorithms for spectral characterization, here we develop methods to determine the eigenvalues of networks comparable in size to real systems, obtaining several surprising results on the spectra of adjacency matrices corresponding to models of real-world graphs. We find that when the number of links grows as the number of nodes, the spectral density of uncorrelated random matrices does not converge to the semicircle law. Furthermore, the spectra of real-world graphs have specific features, depending on the details of the corresponding models. In particular, scale-free graphs develop a trianglelike spectral density with a power-law tail, while small-world graphs have a complex spectral density consisting of several sharp peaks. These and further results indicate that the spectra of correlated graphs represent a practical tool for graph classification and can provide useful insight into the relevant structural properties of real networks.", "title": "" }, { "docid": "073b17e195cec320c20533f154d4ab7f", "text": "Automatic segmentation of cell nuclei is an essential step in image cytometry and histometry. Despite substantial progress, there is a need to improve accuracy, speed, level of automation, and adaptability to new applications. This paper presents a robust and accurate novel method for segmenting cell nuclei using a combination of ideas. The image foreground is extracted automatically using a graph-cuts-based binarization. Next, nuclear seed points are detected by a novel method combining multiscale Laplacian-of-Gaussian filtering constrained by distance-map-based adaptive scale selection. These points are used to perform an initial segmentation that is refined using a second graph-cuts-based algorithm incorporating the method of alpha expansions and graph coloring to reduce computational complexity. Nuclear segmentation results were manually validated over 25 representative images (15 in vitro images and 10 in vivo images, containing more than 7400 nuclei) drawn from diverse cancer histopathology studies, and four types of segmentation errors were investigated. The overall accuracy of the proposed segmentation algorithm exceeded 86%. The accuracy was found to exceed 94% when only over- and undersegmentation errors were considered. The confounding image characteristics that led to most detection/segmentation errors were high cell density, high degree of clustering, poor image contrast and noisy background, damaged/irregular nuclei, and poor edge information. We present an efficient semiautomated approach to editing automated segmentation results that requires two mouse clicks per operation.", "title": "" } ]
scidocsrr
f76831e70b7cf9ed3cc70387913f5c4e
Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning
[ { "docid": "5d4797cffc06cbde079bf4019dc196db", "text": "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)&#x2014;a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.", "title": "" } ]
[ { "docid": "4b75c7158f6c20542385d08eca9bddb3", "text": "PURPOSE\nExtraarticular manifestations of the joint hypermobility syndrome may include the peripheral nervous system. The purpose of this study was to investigate autonomic function in patients with this syndrome.\n\n\nMETHODS\nForty-eight patients with the joint hypermobility syndrome who fulfilled the 1998 Brighton criteria and 30 healthy control subjects answered a clinical questionnaire designed to evaluate the frequency of complaints related to the autonomic nervous system. Next, 27 patients and 21 controls underwent autonomic evaluation: orthostatic testing, cardiovascular vagal and sympathetic functions, catecholamine levels, and adrenoreceptor responsiveness.\n\n\nRESULTS\nSymptoms related to the autonomic nervous system, such as syncope and presyncope, palpitations, chest discomfort, fatigue, and heat intolerance, were significantly more common among patients. Orthostatic hypotension, postural orthostatic tachycardia syndrome, and uncategorized orthostatic intolerance were found in 78% (21/27) of patients compared with in 10% (2/21) of controls. Patients with the syndrome had a greater mean (+/- SD) drop in systolic blood pressure during hyperventilation than did controls (-11 +/- 7 mm Hg vs. -5 +/- 5 mm Hg, P = 0.02) and a greater increase in systolic blood pressure after a cold pressor test (19 +/- 10 mm Hg vs. 11 +/- 13 mm Hg, P = 0.06). Patients with the syndrome also had evidence of alpha-adrenergic (as assessed by administration of phenylephrine) and beta-adrenergic hyperresponsiveness (as assessed by administration of isoproterenol).\n\n\nCONCLUSION\nThe autonomic nervous system-related symptoms of the patients have a pathophysiological basis, which suggests that dysautonomia is an extraarticular manifestation in the joint hypermobility syndrome.", "title": "" }, { "docid": "6c99c86d994460f3314865f0da2f57e4", "text": "BACKGROUND\nThresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid.\n\n\nMETHODS\nSeveral methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. 
Balancing simplicity and comprehensiveness, a simple five-step procedure was developed.\n\n\nRESULTS\nFor a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a 'null' effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results.\n\n\nCONCLUSIONS\nIf the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials.", "title": "" }, { "docid": "45df307e591eb146c1313686e345dede", "text": "A high-precision CMOS time-to-digital converter IC has been designed. Time interval measurement is based on a counter and two-level interpolation realized with stabilized delay lines. Reference recycling in the delay line improves the integral nonlinearity of the interpolator and enables the use of a low frequency reference clock. Multi-level interpolation reduces the number of delay elements and registers and lowers the power consumption. The load capacitor scaled parallel structure in the delay line permits very high resolution. An INL look-up table reduces the effect of the remaining nonlinearity. The digitizer measures time intervals from 0 to 204 /spl mu/s with 8.1 ps rms single-shot precision. The resolution of 12.2 ps from a 5-MHz external reference clock is divided by means of only 20 delay elements.", "title": "" }, { "docid": "78b371e7df39a1ebbad64fdee7303573", "text": "This state of the art report focuses on glyph-based visualization, a common form of visual design where a data set is depicted by a collection of visual objects referred to as glyphs. Its major strength is that patterns of multivariate data involving more than two attribute dimensions can often be more readily perceived in the context of a spatial relationship, whereas many techniques for spatial data such as direct volume rendering find difficult to depict with multivariate or multi-field data, and many techniques for non-spatial data such as parallel coordinates are less able to convey spatial relationships encoded in the data. This report fills several major gaps in the literature, drawing the link between the fundamental concepts in semiotics and the broad spectrum of glyph-based visualization, reviewing existing design guidelines and implementation techniques, and surveying the use of glyph-based visualization in many applications.", "title": "" }, { "docid": "894f5289293a72084647e07f8e7423f7", "text": "Convolutional Neural Networks (CNNs) have been widely adopted for many imaging applications. For image aesthetics prediction, state-of-the-art algorithms train CNNs on a recently-published large-scale dataset, AVA. However, the distribution of the aesthetic scores on this dataset is extremely unbalanced, which limits the prediction capability of existing methods. We overcome such limitation by using weighted CNNs. 
We train a regression model that improves the prediction accuracy of the aesthetic scores over state-of-the-art algorithms. In addition, we propose a novel histogram prediction model that not only predicts the aesthetic score, but also estimates the difficulty of performing aesthetics assessment for an input image. We further show an image enhancement application where we obtain an aesthetically pleasing crop of an input image using our regression model.", "title": "" }, { "docid": "688ee7a4bde400a6afbd6972d729fad4", "text": "Learning-to-Rank (LtR) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each of the documents, in turn exploited to effectively rank them. Although the scoring efficiency of LtR models is critical in several applications – e.g., it directly impacts on response time and throughput of Web query processing – it has received relatively little attention so far. The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that can influence both effectiveness and efficiency, where higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than other algorithms that induce simpler models like linear combination of features. We extensively analyze the quality versus efficiency trade-off of a wide spectrum of state-of-the-art LtR, and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we used publicly available datasets and we contribute an open source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees (GBRT), Lambda-Mart (λ-MART), and the first public-domain implementation of Oblivious Lambda-Mart (λ-MART), an algorithm that induces forests of oblivious regression trees. We investigate how the different training parameters impact on the quality versus efficiency trade-off, and provide a thorough comparison of several algorithms in the quality-cost space. The experiments conducted show that there is not an overall best algorithm, but the optimal choice depends on the time budget.
[Fig. 1. The architecture of a generic machine-learned ranking pipeline.]", "title": "" }, { "docid": "1b4963cac3a0c3b0ae469f616b4295a8", "text": "The volume of traveling websites is rapidly increasing. This makes relevant information extraction more challenging. Several fuzzy ontology-based systems have been proposed to decrease the manual work of a full-text query search engine and opinion mining. However, most search engines are keyword-based, and available full-text search engine systems are still imperfect at extracting precise information using different types of user queries. In opinion mining, travelers do not declare their hotel opinions entirely but express individual feature opinions in reviews. Hotel reviews have numerous uncertainties, and most featured opinions are based on complex linguistic wording (small, big, very good and very bad). Available ontology-based systems cannot extract blurred information from reviews to provide better solutions. To solve these problems, this paper proposes a new extraction and opinion mining system based on a type-2 fuzzy ontology called T2FOBOMIE. The system reformulates the user’s full-text query to extract the user requirement and convert it into the format of a proper classical full-text search engine query. The proposed system retrieves targeted hotel reviews and extracts feature opinions from reviews using a fuzzy domain ontology. The fuzzy domain ontology, user information and hotel information are integrated to form a type-2 fuzzy merged ontology for the retrieving of feature polarity and individual hotel polarity. The Protégé OWL-2 (Ontology Web Language) tool is used to develop the type-2 fuzzy ontology. A series of experiments were designed and demonstrated that T2FOBOMIE performance is highly productive for analyzing reviews and accurate opinion mining.", "title": "" }, { "docid": "1298ddbeea84f6299e865708fd9549a6", "text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.", "title": "" }, { "docid": "517454eb09e377bb157926e196094a2e", "text": "Wireless sensor networks are one of the emerging areas which have equipped scientists with the capability of developing real-time monitoring systems.
This paper discusses the development of a wireless sensor network(WSN) to detect landslides, which includes the design, development and implementation of a WSN for real time monitoring, the development of the algorithms needed that will enable efficient data collection and data aggregation, and the network requirements of the deployed landslide detection system. The actual deployment of the testbed is in the Idukki district of the Southern state of Kerala, India, a region known for its heavy rainfall, steep slopes, and frequent landslides.", "title": "" }, { "docid": "9a3cc8e2bef4f9ecec5bf6f5111562f2", "text": "We present a study that explores the use of a commercially available eye tracker as a control device for video games. We examine its use across multiple gaming genres and present games that utilize the eye tracker in a variety of ways. First, we describe a first-person shooter that uses the eyes to control orientation. Second, we study the use of eye movements for more natural interaction with characters in a role playing game. And lastly, we examine the use of eye tracking as a means to control a modified version of the classic action/arcade game Missile Command. Our results indicate that the use of an eye tracker can increase the immersion of a video game and can significantly alter the gameplay experience.", "title": "" }, { "docid": "0daa43669ae68a81e5eb71db900976c6", "text": "Fertilizer plays an important role in maintaining soil fertility, increasing yields and improving harvest quality. However, a significant portion of fertilizers are lost, increasing agricultural cost, wasting energy and polluting the environment, which are challenges for the sustainability of modern agriculture. To meet the demands of improving yields without compromising the environment, environmentally friendly fertilizers (EFFs) have been developed. EFFs are fertilizers that can reduce environmental pollution from nutrient loss by retarding, or even controlling, the release of nutrients into soil. Most of EFFs are employed in the form of coated fertilizers. The application of degradable natural materials as a coating when amending soils is the focus of EFF research. Here, we review recent studies on materials used in EFFs and their effects on the environment. The major findings covered in this review are as follows: 1) EFF coatings can prevent urea exposure in water and soil by serving as a physical barrier, thereby reducing the urea hydrolysis rate and decreasing nitrogen oxide (NOx) and dinitrogen (N2) emissions, 2) EFFs can increase the soil organic matter content, 3) hydrogel/superabsorbent coated EFFs can buffer soil acidity or alkalinity and lead to an optimal pH for plants, and 4) hydrogel/superabsorbent coated EFFs can improve water-retention and water-holding capacity of soil. In conclusion, EFFs play an important role in enhancing nutrients efficiency and reducing environmental pollution.", "title": "" }, { "docid": "15102e561d9640ee39952e4ad62ef896", "text": "OBJECTIVE\nTo define the relative position of the maxilla and mandible in fetuses with trisomy 18 at 11 + 0 to 13 + 6 weeks of gestation.\n\n\nMETHODS\nA three-dimensional (3D) volume of the fetal head was obtained before karyotyping at 11 + 0 to 13 + 6 weeks of gestation in 36 fetuses subsequently found to have trisomy 18, and 200 chromosomally normal fetuses. 
The frontomaxillary facial (FMF) angle and the mandibulomaxillary facial (MMF) angle were measured in a mid-sagittal view of the fetal face.\n\n\nRESULTS\nIn the chromosomally normal group both the FMF and MMF angles decreased significantly with crown-rump length (CRL). In the trisomy 18 fetuses the FMF angle was significantly greater and the angle was above the 95(th) centile of the normal range in 21 (58.3%) cases. In contrast, in trisomy 18 fetuses the MMF angle was significantly smaller than that in normal fetuses and the angle was below the 5(th) centile of the normal range in 12 (33.3%) cases.\n\n\nCONCLUSIONS\nTrisomy 18 at 11 + 0 to 13 + 6 weeks of gestation is associated with both mid-facial hypoplasia and micrognathia or retrognathia that can be documented by measurement of the FMF angle and MMF angle, respectively.", "title": "" }, { "docid": "db5eb3eef66f26cedb6cacf5e1373403", "text": "In this article, we present a novel approach for modulating the shape of transitions between terrain materials to produce detailed and varied contours where blend resolution is limited. Whereas texture splatting and blend mapping add detail to transitions at the texel level, our approach addresses the broader shape of the transition by introducing intermittency and irregularity. Our results have proven that enriched detail of the blend contour can be achieved with a performance competitive to existing approaches without additional texture, geometry resources, or asset preprocessing. We achieve this by compositing blend masks on-the-fly with the subdivision of texture space into differently sized patches to produce irregular contours from minimal artistic input. Our approach is of particular importance for applications where GPU resources or artistic input is limited or impractical.", "title": "" }, { "docid": "2583e0ccbf65571d98e78547c8b9aeb4", "text": "The current evolution of the cyber-threat ecosystem shows that no system can be considered invulnerable. It is therefore important to quantify the risk level within a system and devise risk prediction methods such that proactive measures can be taken to reduce the damage of cyber attacks. We present RiskTeller, a system that analyzes binary file appearance logs of machines to predict which machines are at risk of infection months in advance. Risk prediction models are built by creating, for each machine, a comprehensive profile capturing its usage patterns, and then associating each profile to a risk level through both fully and semi-supervised learning methods. We evaluate RiskTeller on a year-long dataset containing information about all the binaries appearing on machines of 18 enterprises. We show that RiskTeller can use the machine profile computed for a given machine to predict subsequent infections with the highest prediction precision achieved to date.", "title": "" }, { "docid": "113c07908c1f22c7671553c7f28c0b3f", "text": "Nearly 80% of children in the United States have at least 1 sibling, indicating that the birth of a baby sibling is a normative ecological transition for most children. Many clinicians and theoreticians believe the transition is stressful, constituting a developmental crisis for most children. Yet, a comprehensive review of the empirical literature on children's adjustment over the transition to siblinghood (TTS) has not been done for several decades. The current review summarizes research examining change in first borns' adjustment to determine whether there is evidence that the TTS is disruptive for most children. 
Thirty studies addressing the TTS were found, and of those studies, the evidence did not support a crisis model of developmental transitions, nor was there overwhelming evidence of consistent changes in firstborn adjustment. Although there were decreases in children's affection and responsiveness toward mothers, the results were more equivocal for many other behaviors (e.g., sleep problems, anxiety, aggression, regression). An inspection of the scientific literature indicated there are large individual differences in children's adjustment and that the TTS can be a time of disruption, an occasion for developmental advances, or a period of quiescence with no noticeable changes. The TTS may be a developmental turning point for some children that portends future psychopathology or growth depending on the transactions between children and the changes in the ecological context over time. A developmental ecological systems framework guided the discussion of how child, parent, and contextual factors may contribute to the prediction of firstborn children's successful adaptation to the birth of a sibling.", "title": "" }, { "docid": "e3104e5311dee57067540869f8036ba9", "text": "Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users' real-world tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, under various levels of environmental demands on attention, in comparison to the status-quo approach of soft buttons. We find that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. In fact, the speed and accuracy of bezel gestures did not appear to be significantly affected by environment, and some gestures could be articulated eyes-free, with one hand. Bezel-initiated gestures offered the fastest performance, and mark-based gestures were the most accurate. Bezel-initiated marks therefore may offer a promising approach for mobile touch-screen interaction that is less demanding of the user's attention.", "title": "" }, { "docid": "df6a26b68ebc49f6cc0792ede3d8266f", "text": "Nested Chinese Restaurant Process (nCRP) topic models are powerful nonparametric Bayesian methods to extract a topic hierarchy from a given text corpus, where the hierarchical structure is automatically determined by the data. Hierarchical Latent Dirichlet Allocation (hLDA) is a popular instance of nCRP topic models. However, hLDA has only been evaluated at small scale, because the existing collapsed Gibbs sampling and instantiated weight variational inference algorithms either are not scalable or sacrifice inference quality with mean-field assumptions. Moreover, an efficient distributed implementation of the data structures, such as dynamically growing count matrices and trees, is challenging. In this paper, we propose a novel partially collapsed Gibbs sampling (PCGS) algorithm, which combines the advantages of collapsed and instantiated weight algorithms to achieve good scalability as well as high model quality. An initialization strategy is presented to further improve the model quality. Finally, we propose an efficient distributed implementation of PCGS through vectorization, pre-processing, and a careful design of the concurrent data structures and communication strategy.
Empirical studies show that our algorithm is 111 times more efficient than the previous open-source implementation for hLDA, with comparable or even better model quality. Our distributed implementation can extract 1,722 topics from a 131-million-document corpus with 28 billion tokens, which is 4-5 orders of magnitude larger than the previous largest corpus, with 50 machines in 7 hours.", "title": "" }, { "docid": "1ebb46b4c9e32423417287ab26cae14b", "text": "Two field studies explored the relationship between self-awareness and transgressive behavior. In the first study, 363 Halloween trick-or-treaters were instructed to only take one candy. Self-awareness induced by the presence of a mirror placed behind the candy bowl decreased transgression rates for children who had been individuated by asking them their name and address, but did not affect the behavior of children left anonymous. Self-awareness influenced older but not younger children. Naturally occurring standards instituted by the behavior of the first child to approach the candy bowl in each group were shown to interact with the experimenter's verbally stated standard. The behavior of 349 subjects in the second study replicated the findings in the first study. Additionally, when no standard was stated by the experimenter, children took more candy when not self-aware than when self-aware.", "title": "" }, { "docid": "a0a28f85247279d63a5b5f1189818f2c", "text": "In this paper, we rigorously study tractable models for provably recovering low-rank tensors. Unlike their matrix-based predecessors, current convex approaches for recovering low-rank tensors based on incomplete (tensor completion) and/or grossly corrupted (tensor robust principal analysis) observations still suffer from the lack of theoretical guarantees, although they have been used in various recent applications and have exhibited promising empirical performance. In this work, we attempt to fill this gap. Specifically, we propose a class of convex recovery models (including strongly convex programs) that can be proved to guarantee exact recovery under a set of new tensor incoherence conditions which only require the existence of one low-rank mode, and characterize the problems where our models tend to perform well.", "title": "" }, { "docid": "d7594a6e11835ac94ee40e5d69632890", "text": "(CLUES) is an advanced, automated mortgage-underwriting rule-based expert system. The system was developed to increase the production capacity and productivity of Countrywide branches, improve the consistency of underwriting, and reduce the cost of originating a loan. The system receives selected information from the loan application, credit report, and appraisal. It then decides whether the loan should be approved or whether it requires further review by a human underwriter. If the system approves the loan, no further review is required, and the application is funded. CLUES has been in operation since February 1993 and is currently processing more than 8500 loans each month in over 300 decentralized branches around the country.", "title": "" } ]
scidocsrr
1616d9820e6a65a060b577fc5f486c03
Energy Cloud: Real-Time Cloud-Native Energy Management System to Monitor and Analyze Energy Consumption in Multiple Industrial Sites
[ { "docid": "a44b74738723580f4056310d6856bb74", "text": "This book covers the theory and principles of core avionic systems in civil and military aircraft, including displays, data entry and control systems, fly by wire control systems, inertial sensor and air data systems, navigation, autopilot systems an... Use the latest data mining best practices to enable timely, actionable, evidence-based decision making throughout your organization! Real-World Data Mining demystifies current best practices, showing how to use data mining to uncover hidden patterns ... Data Warehousing in the Age of the Big Data will help you and your organization make the most of unstructured data with your existing data warehouse. As Big Data continues to revolutionize how we use data, it doesn't have to create more confusion. Ex... This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in data base systems and new data base applications and is also designed to give a broad, yet ....", "title": "" }, { "docid": "bc8fe59fbfafebaa3c104e35acd632a2", "text": "In our Big Data era, data is being generated, collected and analyzed at an unprecedented scale, and data-driven decision making is sweeping through all aspects of society. Recent studies have shown that poor quality data is prevalent in large databases and on the Web. Since poor quality data can have serious consequences on the results of data analyses, the importance of veracity, the fourth `V' of big data is increasingly being recognized. In this tutorial, we highlight the substantial challenges that the first three `V's, volume, velocity and variety, bring to dealing with veracity in big data. Due to the sheer volume and velocity of data, one needs to understand and (possibly) repair erroneous data in a scalable and timely manner. With the variety of data, often from a diversity of sources, data quality rules cannot be specified a priori; one needs to let the “data to speak for itself” in order to discover the semantics of the data. This tutorial presents recent results that are relevant to big data quality management, focusing on the two major dimensions of (i) discovering quality issues from the data itself, and (ii) trading-off accuracy vs efficiency, and identifies a range of open problems for the community.", "title": "" } ]
[ { "docid": "b21ae248eea30b91e41012ab70cb6d81", "text": "Communication technology plays an increasingly important role in the growing automated metering infrastructure (AMI) market. This paper presents a thorough analysis and comparison of four application layer protocols in the smart metering context. The inspected protocols are DLMS/COSEM, the Smart Message Language (SML), and the MMS and SOAP mappings of IEC 61850. The focus of this paper is on their use over TCP/IP. The protocols are first compared with respect to qualitative criteria such as the ability to transmit clock synchronization information. Afterwards the message size of meter reading requests and responses and the different binary encodings of the protocols are compared.", "title": "" }, { "docid": "85b99b2c7b209f41b539b0d1041742fd", "text": "Depth maps, characterizing per-pixel physical distance between objects in a 3D scene and a capturing camera, can now be readily acquired using inexpensive active sensors such as Microsoft Kinect. However, the acquired depth maps are often corrupted due to surface reflection or sensor noise. In this paper, we build on two previously developed works in the image denoising literature to restore single depth maps-i.e., to jointly exploit local smoothness and nonlocal self-similarity of a depth map. Specifically, we propose to first cluster similar patches in a depth image and compute an average patch, from which we deduce a graph describing correlations among adjacent pixels. Then we transform similar patches to the same graph-based transform (GBT) domain, where the GBT basis vectors are learned from the derived correlation graph. Finally, we perform an iterative thresholding procedure in the GBT domain to enforce group sparsity. Experimental results show that for single depth maps corrupted with additive white Gaussian noise (AWGN), our proposed NLGBT denoising algorithm can outperform state-of-the-art image denoising methods such as BM3D by up to 2.37dB in terms of PSNR.", "title": "" }, { "docid": "7e4e5472e5ee0b25511975f3422d2173", "text": "Most people with Parkinson's disease (PD) fall and many experience recurrent falls. The aim of this review was to examine the scope of recurrent falls and to identify factors associated with recurrent fallers. A database search for journal articles which reported prospectively collected information concerning recurrent falls in people with PD identified 22 studies. In these studies, 60.5% (range 35 to 90%) of participants reported at least one fall, with 39% (range 18 to 65%) reporting recurrent falls. Recurrent fallers reported an average of 4.7 to 67.6 falls per person per year (overall average 20.8 falls). Factors associated with recurrent falls include: a positive fall history, increased disease severity and duration, increased motor impairment, treatment with dopamine agonists, increased levodopa dosage, cognitive impairment, fear of falling, freezing of gait, impaired mobility and reduced physical activity. The wide range in the frequency of recurrent falls experienced by people with PD suggests that it would be beneficial to classify recurrent fallers into sub-groups based on fall frequency. 
Given that there are several factors particularly associated with recurrent falls, fall management and prevention strategies specifically targeting recurrent fallers require urgent evaluation in order to inform clinical practice.", "title": "" }, { "docid": "826e54e8e46dcea0451b53645e679d55", "text": "Microtia is a congenital disease with various degrees of severity, ranging from the presence of rudimentary and malformed vestigial structures to the total absence of the ear (anotia). The complex anatomy of the external ear and the necessity to provide good projection and symmetry make this reconstruction particularly difficult. The aim of this work is to report our surgical technique of microtic ear correction and to analyse the short and long term results. From 2000 to 2013, 210 patients affected by microtia were treated at the Maxillo-Facial Surgery Division, Head and Neck Department, University Hospital of Parma. The patient population consisted of 95 women and 115 men, aged from 7 to 49 years. A total of 225 reconstructions have been performed in two surgical stages based on Firmin's technique with some modifications and refinements. The first stage consists in fabrication and grafting of a three-dimensional costal cartilage framework. The second stage is performed 5-6 months later: the reconstructed ear is raised up and an additional cartilaginous graft is used to increase its projection. A mastoid fascial flap together with a skin graft are then used to protect the cartilage graft. All reconstructions were performed without any major complication. The results have been considered satisfactory by all patients starting from the first surgical step. Low morbidity, the good results obtained and a high rate of patient satisfaction make our protocol an optimal choice for treatment of microtia. The surgeon's experience and postoperative patient care must be considered as essential aspects of treatment.", "title": "" }, { "docid": "20bb0dc721040ae7d21dd9027a7a3cd4", "text": "The advent of cloud computing (CC) in recent years has attracted substantial interest from various institutions, especially higher education institutions, which wish to consider the advantages of its features. Many universities have migrated from traditional forms of teaching to electronic learning services, and they rely upon information and communication technology services. The usage of CC in educational environments provides many benefits, such as low-cost services for academics and students. The expanded use of CC comes with significant adoption challenges. Understanding the position of higher education institutions with respect to CC adoption is an essential research area. This paper investigated the current state of CC adoption in the higher education sector in order to enrich the research in this area of interest. Existing limitations and knowledge gaps in current empirical studies are identified. Moreover, suggested areas for further research will be highlighted for the benefit of other researchers who are interested in this topic. These findings encourage institutions of education, especially in higher education, to adopt cloud computing technology. Keywords—Cloud computing; education system; e-learning; information and communication technology (ICT)", "title": "" }, { "docid": "1963b3b1326fa4ed99ef39c9aaab0719", "text": "We take an ecological approach to studying social media use and its relation to mood among college students. 
We conducted a mixed-methods study of computer and phone logging with daily surveys and interviews to track college students' use of social media during all waking hours over seven days. Continual and infrequent checkers show different preferences of social media sites. Age differences also were found. Lower classmen tend to be heavier users and to primarily use Facebook, while upper classmen use social media less frequently and utilize sites other than Facebook more often. Factor analysis reveals that social media use clusters into patterns of content-sharing, text-based entertainment/discussion, relationships, and video consumption. The more constantly one checks social media daily, the less positive is one's mood. Our results suggest that students construct their own patterns of social media usage to meet their changing needs in their environment. The findings can inform further investigation into social media use as a benefit and/or distraction for students.", "title": "" }, { "docid": "4e5d2a871ea1cfed7188207b709766a5", "text": "key elements of orthodontic diagnosis and treatment planning over the last decade.1-3 Recent advances in technology now permit the clinician to measure dynamic lip-tooth relationships and incorporate that information into the orthodontic problem list and biomechanical plan. Digital videography is particularly useful in both smile analysis and in doctor/patient communication. Smile design is a multifactorial process, with clinical success determined by an understanding of the patient’s soft-tissue treatment limitations and the extent to which orthodontics or multidisciplinary treatment can satisfy the patient’s and orthodontist’s esthetic goals.", "title": "" }, { "docid": "9f21af3bc0955dcd9a05898f943f54ad", "text": "Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intraand inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.", "title": "" }, { "docid": "4731a95b14335a84f27993666b192bba", "text": "Blockchain has been applied to study data privacy and network security recently. 
In this paper, we propose a punishment scheme based on the action record on the blockchain to suppress the attack motivation of the edge servers and the mobile devices in the edge network. The interactions between a mobile device and an edge server are formulated as a blockchain security game, in which the mobile device sends a request to the server to obtain real-time service or launches attacks against the server for illegal security gains, and the server chooses to perform the request from the device or attack it. The Nash equilibria (NEs) of the game are derived and the conditions that each NE exists are provided to disclose how the punishment scheme impacts the adversary behaviors of the mobile device and the edge server.", "title": "" }, { "docid": "40d4716214b80ff944c552dfee09f5ec", "text": "Since the appearance of Android, its permission system was central to many studies of Android security. For a long time, the description of the architecture provided by Enck et al. in [31] was immutably used in various research papers. The introduction of highly anticipated runtime permissions in Android 6.0 forced us to reconsider this model. To our surprise, the permission system evolved with almost every release. After analysis of 16 Android versions, we can confirm that the modifications, especially introduced in Android 6.0, considerably impact the aptness of old conclusions and tools for newer releases. For instance, since Android 6.0 some signature permissions, previously granted only to apps signed with a platform certificate, can be granted to third-party apps even if they are signed with a non-platform certificate; many permissions considered before as threatening are now granted by default. In this paper, we review in detail the updated system, introduced changes, and their security implications. We highlight some bizarre behaviors, which may be of interest for developers and security researchers. We also found a number of bugs during our analysis, and provided patches to AOSP where possible.", "title": "" }, { "docid": "2de4de4a7b612fd8d87a40780acdd591", "text": "In the past decade, advances in speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture; in terms of both data structures and algorithms. We discuss how vertically fragmented data structures optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical model that incorporates memory access cost. Experiments that validate this model were performed on the Monet database system. We obtained exact statistics on events like TLB misses, L1 and L2 cache misses, by using hardware performance counters found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms make them perform well, which is confirmed by experimental results. 
*This work was carried out when the author was at the University of Amsterdam, supported by SION grant 612-23-431. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999.", "title": "" }, { "docid": "9464f2e308b5c8ab1f2fac1c008042c0", "text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.", "title": "" }, { "docid": "d97df185799408ae61ce2d210deec6e2", "text": "In e-commerce websites like Taobao, brand is playing a more important role in influencing users' decision of click/purchase, partly because users are now attaching more importance to the quality of products and brand is an indicator of quality. However, existing ranking systems are not specifically designed to satisfy this kind of demand. Some design tricks may partially alleviate this problem, but still cannot provide satisfactory results or may create additional interaction cost. In this paper, we design the first brand-level ranking system to address this problem. The key challenge of this system is how to sufficiently exploit users' rich behavior in e-commerce websites to rank the brands. In our solution, we firstly conduct the feature engineering specifically tailored for the personalized brand ranking problem and then rank the brands by an adapted Attention-GRU model containing three important modifications. Note that our proposed modifications can also apply to many other machine learning models on various tasks. We conduct a series of experiments to evaluate the effectiveness of our proposed ranking model and test the response to the brand-level ranking system from real users on a large-scale e-commerce platform, i.e. Taobao.", "title": "" }, { "docid": "2526915745dda9026836347292f79d12", "text": "I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. 
I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space.", "title": "" }, { "docid": "917154ffa5d9108fd07782d1c9a183ba", "text": "Recommender systems for automatically suggested items of interest to users have become increasingly essential in fields where mass personalization is highly valued. The popular core techniques of such systems are collaborative filtering, content-based filtering and combinations of these. In this paper, we discuss hybrid approaches, using collaborative and also content data to address cold-start - that is, giving recommendations to novel users who have no preference on any items, or recommending items that no user of the community has seen yet. While there have been lots of studies on solving the item-side problems, solution for user-side problems has not been seen public. So we develop a hybrid model based on the analysis of two probabilistic aspect models using pure collaborative filtering to combine with users' information. The experiments with MovieLen data indicate substantial and consistent improvements of this model in overcoming the cold-start user-side problem.", "title": "" }, { "docid": "6f0ebd6314cd5c012f791d0e5c448045", "text": "This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes moving along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of L2,1 norm of matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.", "title": "" }, { "docid": "703f0baf67a1de0dfb03b3192327c4cf", "text": "Fleet management systems are commonly used to coordinate mobility and delivery services in a broad variety of domains. However, their traditional top-down control architecture becomes a bottleneck in open and dynamic environments, where scalability, proactiveness, and autonomy are becoming key factors for their success. Here, the authors present an abstract event-based architecture for fleet management systems that supports tailoring dynamic control regimes for coordinating fleet vehicles, and illustrate it for the case of medical emergency management. Then, they go one step ahead in the transition toward automatic or driverless fleets, by conceiving fleet management systems in terms of cyber-physical systems, and putting forward the notion of cyber fleets.", "title": "" }, { "docid": "2fbd1b2e25473affb40990195b26a88b", "text": "In this paper we considerably improve on a state-of-the-art alpha matting approach by incorporating a new prior which is based on the image formation process. 
In particular, we model the prior probability of an alpha matte as the convolution of a high-resolution binary segmentation with the spatially varying point spread function (PSF) of the camera. Our main contribution is a new and efficient de-convolution approach that recovers the prior model, given an approximate alpha matte. By assuming that the PSF is a kernel with a single peak, we are able to recover the binary segmentation with an MRF-based approach, which exploits flux and a new way of enforcing connectivity. The spatially varying PSF is obtained via a partitioning of the image into regions of similar defocus. Incorporating our new prior model into a state-of-the-art matting technique produces results that outperform all competitors, which we confirm using a publicly available benchmark.", "title": "" }, { "docid": "eb20856f797f35ea6eb05f4646e54f34", "text": "Malware in smartphones is growing at a significant rate. There are currently more than 250 million smartphone users in the world and this number is expected to grow in coming years [44]. In the past few years, smartphones have evolved from simple mobile phones into sophisticated computers. This evolution has enabled smartphone users to access and browse the Internet, to receive and send emails, SMS and MMS messages and to connect devices in order to exchange information. All of these features make the smartphone a useful tool in our daily lives, but at the same time they render it more vulnerable to attacks by malicious applications. Given that most users store sensitive information on their mobile phones, such as phone numbers, SMS messages, emails, pictures and videos, smartphones are a very appealing target for attackers and malware developers. The need to maintain security and data confidentiality on the Android platform makes the analysis of malware on this platform an urgent issue. We have based this report on previous approaches to the dynamic analysis of application behavior, and have adapted one approach in order to detect malware on the Android platform. The detector is embedded in a framework to collect traces from a number of real users and is based on crowdsourcing. Our framework has been tested by analyzing data collected at the central server using two types of data sets: data from artificial malware created for test purposes and data from real malware found in the wild. The method used is shown to be an effective means of isolating malware and alerting users of downloaded malware, which suggests that it has great potential for helping to stop the spread of detected malware to a larger community. Finally, the report will give a complete review of results for self-written and real Android malware applications that have been tested with the system. This thesis project shows that it is feasible to create an Android malware detection system with satisfactory results.", "title": "" }, { "docid": "a8abc8da0f2d5f8055c4ed6ea2294c6c", "text": "This paper presents the design of a modulated metasurface (MTS) antenna capable to provide both right-hand (RH) and left-hand (LH) circularly polarized (CP) boresight radiation at Ku-band (13.5 GHz). This antenna is based on the interaction of two cylindrical-wavefront surface wave (SW) modes of transverse electric (TE) and transverse magnetic (TM) types with a rotationally symmetric, anisotropic-modulated MTS placed on top of a grounded slab. 
A properly designed centered circular waveguide feed excites the two orthogonal (decoupled) SW modes and guarantees the balance of the power associated with each of them. By a proper selection of the anisotropy and modulation of the MTS pattern, the phase velocities of the two modes are synchronized, and leakage is generated in broadside direction with two orthogonal linear polarizations. When the circular waveguide is excited with two mutually orthogonal TE11 modes in phase-quadrature, an LHCP or RHCP antenna is obtained. This paper explains the feeding system and the MTS requirements that guarantee the balanced conditions of the TM/TE SWs and consequent generation of dual CP boresight radiation.", "title": "" } ]
scidocsrr
b8c16bf86e4334e0a9b5e9a53c883285
A Convex Formulation for Learning Task Relationships in Multi-Task Learning
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" } ]
[ { "docid": "1f45d589a42815614d48d20b4ca4abb6", "text": "The modification of the conventional helical antenna by two pitch angles and a truncated cone reflector was analyzed. Limits of the axial radiation mode were examined by criteria defined with axial ratio, HPBW and SLL of the antenna. Gain increase was achieved but the bandwidth of the axial radiation mode remained almost the same. The practical adjustment was made on helical antenna with dielectric cylinder and measured in a laboratory. The measurement results confirmed the improvement of the conventional antenna in terms of gain increase.", "title": "" }, { "docid": "4818794eddc8af63fd99b000bd00736a", "text": "Dysproteinemia is characterized by the overproduction of an Ig by clonal expansion of cells from the B cell lineage. The resultant monoclonal protein can be composed of the entire Ig or its components. Monoclonal proteins are increasingly recognized as a contributor to kidney disease. They can cause injury in all areas of the kidney, including the glomerular, tubular, and vascular compartments. In the glomerulus, the major mechanism of injury is deposition. Examples of this include Ig amyloidosis, monoclonal Ig deposition disease, immunotactoid glomerulopathy, and cryoglobulinemic GN specifically from types 1 and 2 cryoglobulins. Mechanisms that do not involve Ig deposition include the activation of the complement system, which causes complement deposition in C3 glomerulopathy, and cytokines/growth factors as seen in thrombotic microangiopathy and precipitation, which is involved with cryoglobulinemia. It is important to recognize that nephrotoxic monoclonal proteins can be produced by clones from any of the B cell lineages and that a malignant state is not required for the development of kidney disease. The nephrotoxic clones that do not meet requirement for a malignant condition are now called monoclonal gammopathy of renal significance. Whether it is a malignancy or monoclonal gammopathy of renal significance, preservation of renal function requires substantial reduction of the monoclonal protein. With better understanding of the pathogenesis, clone-directed strategies, such as rituximab against CD20 expressing B cell and bortezomib against plasma cell clones, have been used in the treatment of these diseases. These clone-directed therapies have been found to be more effective than immunosuppressive regimens used in nonmonoclonal protein-related kidney diseases.", "title": "" }, { "docid": "0418d5ce9f15a91aeaacd65c683f529d", "text": "We propose a novel cancelable biometric approach, known as PalmHashing, to solve the non-revocable biometric issue. The proposed method hashes palmprint templates with a set of pseudo-random keys to obtain a unique code called palmhash. The palmhash code can be stored in portable devices such as tokens and smartcards for verification. Multiple sets of palmhash codes can be maintained in multiple applications. Thus the privacy and security of the applications can be greatly enhanced. If compromised, revocation can also be achieved via direct replacement of a new set of palmhash code. In addition, PalmHashing offers several advantages over contemporary biometric approaches such as clear separation of the genuine-imposter populations and zero EER occurrences. In this paper, we outline the implementation details of this method and also highlight its potential in security-critical applications. © 2004 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "85b885986958b388b7fda7ca2426a583", "text": "To reduce the risk of catheter-associated urinary tract infection (CAUTI), limiting use of indwelling catheters is encouraged with alternative collection methods and early removal. Adverse effects associated with such practices have not been described. We also determined if CAUTI preventative measures increase the risk of catheter-related complications. We hypothesized that there are complications associated with early removal of indwelling catheters. We described complications associated with indwelling catheterization and intermittent catheterization, and compared complication rates before and after policy updates changed catheterization practices. We performed retrospective cohort analysis of trauma patients admitted between August 1, 2009, and December 31, 2013 who required indwelling catheter. Associations between catheter days and adverse outcomes such as infection, bladder overdistention injury, recatheterization, urinary retention, and patients discharged with indwelling catheter were evaluated. The incidence of CAUTI and the total number of catheter days pre and post policy change were similar. The incidence rate of urinary retention and associated complications has increased since the policy changed. Practices intended to reduce the CAUTI rate are associated with unintended complications, such as urinary retention. Patient safety and quality improvement programs should monitor all complications associated with urinary catheterization practices, not just those that represent financial penalties.", "title": "" }, { "docid": "afeb909f4be9da56dcaeb86d464ec75e", "text": "Synthesizing expressive speech with appropriate prosodic variations, e.g., various styles, still has much room for improvement. Previous methods have explored to use manual annotations as conditioning attributes to provide variation information. However, the related training data are expensive to obtain and the annotated style codes can be ambiguous and unreliable. In this paper, we explore utilizing the residual error as conditioning attributes. The residual error is the difference between the prediction of a trained average model and the ground truth. We encode the residual error into a style embedding via a neural networkbased error encoder. The style embedding is then fed to the target synthesis model to provide information for modeling various style distributions more accurately. The average model and the error encoder are jointly optimized with the target synthesis model. Our proposed method has two advantages: 1) the embedding is automatically learned with no need of manual style annotations, which helps overcome data sparsity and ambiguity limitations; 2) For any unseen audio utterance, the style embedding can be efficiently generated. This enables rapid adaptation to the desired style to be achieved with only a single adaptation utterance. Experimental results show that our proposed method outperforms the baseline model in both speech quality and style similarity.", "title": "" }, { "docid": "ece9554b3cb94a4cedd12d5659c8fe0d", "text": "In many real-world network datasets such as co-authorship, co-citation, email communication, etc., relationships are complex and go beyond pairwise. Hypergraphs provide a flexible and natural modeling tool to model such complex relationships. The obvious existence of such complex relationships in many real-world networks naturally motivates the problem of learning with hypergraphs. 
A popular learning paradigm is hypergraph-based semi-supervised learning (SSL) where the goal is to assign labels to initially unlabelled vertices in a hypergraph. Motivated by the fact that a graph convolutional network (GCN) has been effective for graph-based SSL, we propose HyperGCN, a novel GCN for SSL on attributed hypergraphs. Additionally, we show how HyperGCN can be used as a learning-based approach for combinatorial optimisation on NP-hard hypergraph problems. We demonstrate HyperGCN’s effectiveness through detailed experimentation on real-world hypergraphs. We have made HyperGCN’s source code available to foster reproducible research.", "title": "" }, { "docid": "803b681a89e6f3db34061c4b26fc2cd5", "text": "T cells redirected to specific antigen targets with engineered chimeric antigen receptors (CARs) are emerging as powerful therapies in hematologic malignancies. Various CAR designs, manufacturing processes, and study populations, among other variables, have been tested and reported in over 10 clinical trials. Here, we review and compare the results of the reported clinical trials and discuss the progress and key emerging factors that may play a role in effecting tumor responses. We also discuss the outlook for CAR T-cell therapies, including managing toxicities and expanding the availability of personalized cell therapy as a promising approach to all hematologic malignancies. Many questions remain in the field of CAR T cells directed to hematologic malignancies, but the encouraging response rates pave a wide road for future investigation.", "title": "" }, { "docid": "fb1f467ab11bb4c01a9e410bf84ac258", "text": "The modular arrangement of the neocortex is based on the cell minicolumn: a self-contained ecosystem of neurons and their afferent, efferent, and interneuronal connections. The authors' preliminary studies indicate that minicolumns in the brains of autistic patients are narrower, with an altered internal organization. More specifically, their minicolumns reveal less peripheral neuropil space and increased spacing among their constituent cells. The peripheral neuropil space of the minicolumn is the conduit, among other things, for inhibitory local circuit projections. A defect in these GABAergic fibers may correlate with the increased prevalence of seizures among autistic patients. This article expands on our initial findings by arguing for the specificity of GABAergic inhibition in the neocortex as being focused around its mini- and macrocolumnar organization. The authors conclude that GABAergic interneurons are vital to proper minicolumnar differentiation and signal processing (e.g., filtering capacity of the neocortex), thus providing a putative correlate to autistic symptomatology.", "title": "" }, { "docid": "252256527c17c21492e4de0ae50d9729", "text": "Scribbles in scribble-based interactive segmentation such as graph-cut are usually assumed to be perfectly accurate, i.e., foreground scribble pixels will never be segmented as background in the final segmentation. However, it can be hard to draw perfectly accurate scribbles, especially on fine structures of the image or on mobile touch-screen devices. In this paper, we propose a novel ratio energy function that tolerates errors in the user input while encouraging maximum use of the user input information. More specifically, the ratio energy aims to minimize the graph-cut energy while maximizing the user input respected in the segmentation. 
The ratio energy function can be exactly optimized using an efficient iterated graph cut algorithm. The robustness of the proposed method is validated on the GrabCut dataset using both synthetic scribbles and manual scribbles. The experimental results show that the proposed algorithm is robust to the errors in the user input and preserves the \"anchoring\" capability of the user input.", "title": "" }, { "docid": "95a038d92ed94e7a1cefdfab1db18c1d", "text": "Arcing in PV systems has caused multiple residential and commercial rooftop fires. The National Electrical Code® (NEC) added section 690.11 to mitigate this danger by requiring arc-fault circuit interrupters (AFCI). Currently, the requirement is only for series arc-faults, but to fully protect PV installations from arc-fault-generated fires, parallel arc-faults must also be mitigated effectively. In order to de-energize a parallel arc-fault without module-level disconnects, the type of arc-fault must be identified so that proper action can be taken (e.g., opening the array for a series arc-fault and shorting for a parallel arc-fault). In this work, we investigate the electrical behavior of the PV system during series and parallel arc-faults to (a) understand the arcing power available from different faults, (b) identify electrical characteristics that differentiate the two fault types, and (c) determine the location of the fault based on current or voltage of the faulted array. This information can be used to improve arc-fault detector speed and functionality.", "title": "" }, { "docid": "332bcd9b49f3551d8f07e4f21a881804", "text": "Attention plays a critical role in effective learning. By means of attention assessment, it helps learners improve and review their learning processes, and even discover Attention Deficit Hyperactivity Disorder (ADHD). Hence, this work employs modified smart glasses which have an inward facing camera for eye tracking, and an inertial measurement unit for head pose estimation. The proposed attention estimation system consists of eye movement detection, head pose estimation, and machine learning. In eye movement detection, the central point of the iris is found by the locally maximum curve via the Hough transform where the region of interest is derived by the identified left and right eye corners. The head pose estimation is based on the captured inertial data to generate physical features for machine learning. Here, the machine learning adopts Genetic Algorithm (GA)-Support Vector Machine (SVM) where the feature selection of Sequential Floating Forward Selection (SFFS) is employed to determine adequate features, and GA is to optimize the parameters of SVM. Our experiments reveal that the proposed attention estimation system can achieve the accuracy of 93.1% which is fairly good as compared to the conventional systems. Therefore, the proposed system embedded in smart glasses brings users mobile, convenient, and comfortable to assess their attention on learning or medical symptom checker.", "title": "" }, { "docid": "1f8b3933dc49d87204ba934f82f2f84f", "text": "While journalism is evolving toward a rather open-minded participatory paradigm, social media presents overwhelming streams of data that make it difficult to identify the information of a journalist's interest. Given the increasing interest of journalists in broadening and democratizing news by incorporating social media sources, we have developed TweetGathering, a prototype tool that provides curated and contextualized access to news stories on Twitter. 
This tool was built with the aim of assisting journalists both with gathering and with researching news stories as users comment on them. Five journalism professionals who tested the tool found helpful characteristics that could assist them with gathering additional facts on breaking news, as well as facilitating discovery of potential information sources such as witnesses in the geographical locations of news.", "title": "" }, { "docid": "cf9fe52efd734c536d0a7daaf59a9bcd", "text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.", "title": "" }, { "docid": "5953dafaebde90a0f6af717883452d08", "text": "Compact high-voltage Marx generators have found wide ranging applications for driving resistive and capacitive loads. Parasitic or leakage capacitance in compact low-energy Marx systems has proved useful in driving resistive loads, but it can be detrimental when driving capacitive loads where it limits the efficiency of energy transfer to the load capacitance. In this paper, we show how manipulating network designs consisting of these parasitic elements along with internal and external components can optimize the performance of such systems.", "title": "" }, { "docid": "ebd40aaf7fa87beec30ceba483cc5047", "text": "Event Detection (ED) aims to identify instances of specified types of events in text, which is a crucial component in the overall task of event extraction. The commonly used features consist of lexical, syntactic, and entity information, but the knowledge encoded in the Abstract Meaning Representation (AMR) has not been utilized in this task. AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph. In this paper, we demonstrate the effectiveness of AMR to capture and represent the deeper semantic contexts of the trigger words in this task. 
Experimental results further show that adding AMR features on top of the traditional features can achieve 67.8% (with 2.1% absolute improvement) F-measure (F1), which is comparable to the state-of-the-art approaches.", "title": "" }, { "docid": "6a4844bf755830d14fb24caff1aa8442", "text": "We present a stochastic first-order optimization algorithm, named BCSC, that adds a cyclic constraint to stochastic block-coordinate descent. It uses different subsets of the data to update different subsets of the parameters, thus limiting the detrimental effect of outliers in the training set. Empirical tests in benchmark datasets show that our algorithm outperforms state-of-the-art optimization methods in both accuracy as well as convergence speed. The improvements are consistent across different architectures, and can be combined with other training techniques and regularization methods.", "title": "" }, { "docid": "7cd992aec08167cb16ea1192a511f9aa", "text": "In this thesis, we will present an Echo State Network (ESN) to investigate hierarchical cognitive control, one of the functions of Prefrontal Cortex (PFC). This ESN is designed with the intention to implement it as a robot controller, making it useful for biologically inspired robot control and for embodied and embedded PFC research. We will apply the ESN to an n-back task and a Wisconsin Card Sorting task to confirm the hypothesis that topological mapping of temporal and policy abstraction over the PFC can be explained by the effects of two requirements: a better preservation of information when information is processed in different areas, versus a better integration of information when information is processed in a single area.", "title": "" }, { "docid": "0178f7e0f0db3dac510a8b8a94767f34", "text": "We propose a novel method of regularization for recurrent neural networks called surprisal-driven zoneout. In this method, states zoneout (maintain their previous value rather than updating), when the surprisal (discrepancy between the last state's prediction and target) is small. Thus regularization is adaptive and input-driven on a per-neuron basis. We demonstrate the effectiveness of this idea by achieving state-of-the-art bits per character of 1.31 on the Hutter Prize Wikipedia dataset, significantly reducing the gap to the best known highly-engineered compression methods.", "title": "" }, { "docid": "ef02508d3d05cdda0b1b39b53f3820ec", "text": "In natural language generation, a meaning representation of some kind is successively transformed into a sentence or a text. Naturally, a central subtask of this problem is the choice of words, or lexicalization. In this paper, we propose four major issues that determine how a generator tackles lexicalization, and survey the contributions that researchers have made to them. Open problems are identified, and a possible direction for future research is sketched.", "title": "" }, { "docid": "f02bd91e8374506aa4f8a2107f9545e6", "text": "In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationships, we examined how attachment was related to communication technology use within romantic relationships. Participants reported on their attachment style and frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. 
Electronic communication channels (phone and texting) were related to positive relationship qualities, however, once accounting for attachment, only moderated effects were found. Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of a SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals’ needs. 2013 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
a9fdf52d50e102648541ce8a8ca8d724
Static Detection of Second-Order Vulnerabilities in Web Applications
[ { "docid": "827493ff47cff1defaeafff2ef180dce", "text": "We present a static analysis algorithm for detecting security vulnerabilities in PHP, a popular server-side scripting language for building web applications. Our analysis employs a novel three-tier architecture to capture information at decreasing levels of granularity at the intrablock, intraprocedural, and interprocedural level. This architecture enables us to handle dynamic features unique to scripting languages such as dynamic typing and code inclusion, which have not been adequately addressed by previous techniques. We demonstrate the effectiveness of our approach by running our tool on six popular open source PHP code bases and finding 105 previously unknown security vulnerabilities, most of which we believe are remotely exploitable.", "title": "" } ]
[ { "docid": "c5ee2a4e38dfa27bc9d77edcd062612f", "text": "We perform transaction-level analyses of entrusted loans – the largest component of shadow banking in China. There are two types – affiliated and non-affiliated. The latter involve a much higher interest rate than the former and official bank loan rates, and largely flow into the real estate industry. Both involve firms with privileged access to cheap capital to channel funds to less privileged firms and increase when credit is tight. The pricing of entrusted loans, especially that of non-affiliated loans, incorporates fundamental and informational risks. Stock market reactions suggest that both affiliated and non-affiliated loans are fairly-compensated investments.", "title": "" }, { "docid": "87a7e7fe82a5768633b606e95727244d", "text": "Hashing is fundamental to many algorithms and data structures widely used in practice. For theoretical analysis of hashing, there have been two main approaches. First, one can assume that the hash function is truly random, mapping each data item independently and uniformly to the range. This idealized model is unrealistic because a truly random hash function requires an exponential number of bits to describe. Alternatively, one can provide rigorous bounds on performance when explicit families of hash functions are used, such as 2-universal or O(1)-wise independent families. For such families, performance guarantees are often noticeably weaker than for ideal hashing.\n In practice, however, it is commonly observed that simple hash functions, including 2-universal hash functions, perform as predicted by the idealized analysis for truly random hash functions. In this paper, we try to explain this phenomenon. We demonstrate that the strong performance of universal hash functions in practice can arise naturally from a combination of the randomness of the hash function and the data. Specifially, following the large body of literature on random sources and randomness extraction, we model the data as coming from a \"block source,\" whereby each new data item has some \"entropy\" given the previous ones. As long as the (Renyi) entropy per data item is sufficiently large, it turns out that the performance when choosing a hash function from a 2-universal family is essentially the same as for a truly random hash function. We describe results for several sample applications, including linear probing, balanced allocations, and Bloom filters.", "title": "" }, { "docid": "2d86a717ef4f83ff0299f15ef1df5b1b", "text": "Proactive interference (PI) refers to the finding that memory for recently studied (target) information can be vastly impaired by the previous study of other (nontarget) information. PI can be reduced in a number of ways, for instance, by directed forgetting of the prior nontarget information, the testing of the prior nontarget information, or an internal context change before study of the target information. Here we report the results of four experiments, in which we demonstrate that all three forms of release from PI are accompanied by a decrease in participants’ response latencies. Because response latency is a sensitive index of the size of participants’ mental search set, the results suggest that release from PI can reflect more focused memory search, with the previously studied nontarget items being largely eliminated from the search process. Our results thus provide direct evidence for a critical role of retrieval processes in PI release. 2012 Elsevier Inc. All rights reserved. 
Introduction Proactive interference (PI) refers to the finding that memory for recently studied information can be vastly impaired by the previous study of further information (e.g., Underwood, 1957). In a typical PI experiment, participants study a (target) list of items and are later tested on it. In the PI condition, participants study further (nontarget) lists that precede encoding of the target information, whereas in the no-PI condition participants engage in an unrelated distractor task. Typically, recall of the target list is worse in the PI condition than the no-PI condition, which reflects the PI finding. PI has been extensively studied in the past century, has proven to be a very robust finding, and has been suggested to be one of the major causes of forgetting in everyday life (e.g., Underwood, 1957; for reviews, see Anderson & Neely, 1996; Crowder, 1976). Over the years, a number of theories have been put forward to account for PI, most of them suggesting a critical role of retrieval processes in this form of forgetting. For instance, temporal discrimination theory suggests that buildup of PI is caused by a failure to distinguish items from the most recent target list from items that appeared on the earlier nontarget lists. Specifically, the theory assumes that at test participants are unable to restrict their memory search to the target list and instead search the entire set of items that have previously been exposed (Baddeley, 1990; Crowder, 1976; Wixted & Rohrer, 1993). Another retrieval account attributes PI to a generation failure. Here, reduced recall levels of the target items are thought to be due to the impaired ability to access the material’s correct memory representation (Dillon & Thomas, 1975). In contrast to these retrieval explanations of PI, some theories also suggested a role of encoding factors in PI, assuming that the prior study of other lists impairs subsequent encoding of the target list. For instance, attentional resources may deteriorate across item lists and cause the target material to be less well processed in the presence than the absence of the preceding lists (e.g., Crowder, 1976).", "title": "" }, { "docid": "f3459ff684d6309ac773c20e03f86183", "text": "We propose an algorithm to separate simultaneously speaking persons from each other, the “cocktail party problem”, using a single microphone. Our approach involves a deep recurrent neural networks regression to a vector space that is descriptive of independent speakers. Such a vector space can embed empirically determined speaker characteristics and is optimized by distinguishing between speaker masks. We call this technique source-contrastive estimation. The methodology is inspired by negative sampling, which has seen success in natural language processing, where an embedding is learned by correlating and decorrelating a given input vector with output weights. Although the matrix determined by the output weights is dependent on a set of known speakers, we only use the input vectors during inference. Doing so will ensure that source separation is explicitly speaker-independent. Our approach is similar to recent deep neural network clustering and permutation-invariant training research; we use weighted spectral features and masks to augment individual speaker frequencies while filtering out other speakers. We avoid, however, the severe computational burden of other approaches with our technique.
Furthermore, by training a vector space rather than combinations of different speakers or differences thereof, we avoid the so-called permutation problem during training. Our algorithm offers an intuitive, computationally efficient response to the cocktail party problem, and most importantly boasts better empirical performance than other current techniques.", "title": "" }, { "docid": "e7f4fc00b911b9f593020c0ac4bd80ce", "text": "INTRODUCTION\nS2R (sigma-2 receptor)/Pgrmc1 (progesterone receptor membrane component 1) is a cytochrome-related protein that binds directly to heme and various pharmacological compounds. S2R(Pgrmc1) also associates with cytochrome P450 proteins, the EGFR receptor tyrosine kinase and the RNA-binding protein PAIR-BP1. S2R(Pgrmc1) is induced in multiple types of cancer, where it regulates tumor growth and is implicated in progesterone signaling. S2R(Pgrmc1) also increases cholesterol synthesis in non-cancerous cells and may have a role in modulating drug metabolizing P450 proteins.\n\n\nAREAS COVERED\nThis review covers the independent identification of S2R and Pgrmc1 and their induction in cancers, as well as the role of S2R(Pgrmc1) in increasing cholesterol metabolism and P450 activity. This article was formed through a PubMed literature search using, but not limited to, the terms sigma-2 receptor, Pgrmc1, Dap1, cholesterol and aromatase.\n\n\nEXPERT OPINION\nMultiple laboratories have shown that S2R(Pgrmc1) associates with various P450 proteins and increases cholesterol synthesis via Cyp51. However, the lipogenic role of S2R(Pgrmc1) is tissue-specific. Furthermore, the role of S2R(Pgrmc1) in regulating P450 proteins other than Cyp51 appears to be highly selective, with modest inhibitory activity for Cyp3A4 in vitro and a complex regulatory pattern for Cyp21. Cyp19/aromatase is a therapeutic target in breast cancer, and S2R(Pgrmc1) activated Cyp19 significantly in vitro but modestly in biochemical assays. In summary, S2R(Pgrmc1) is a promising therapeutic target for cancer and possibly cholesterol synthesis but research to date has not identified a major role in P450-mediated drug metabolism.", "title": "" }, { "docid": "05db9a684a537fdf1234e92047618e18", "text": "Globally the internet is been accessed by enormous people within their restricted domains. When the client and server exchange messages among each other, there is an activity that can be observed in log files. Log files give a detailed description of the activities that occur in a network that shows the IP address, login and logout durations, the user's behavior etc. There are several types of attacks occurring from the internet. Our focus of research in this paper is Denial of Service (DoS) attacks with the help of pattern recognition techniques in data mining. Through which the Denial of Service attack is identified. Denial of service is a very dangerous attack that jeopardizes the IT resources of an organization by overloading with imitation messages or multiple requests from unauthorized users.", "title": "" }, { "docid": "319a24bca0b0849e05ce8cce327c549b", "text": "This paper presents a summary of the Computational Linguistics and Clinical Psychology (CLPsych) 2015 shared and unshared tasks. These tasks aimed to provide apples-to-apples comparisons of various approaches to modeling language relevant to mental health from social media. 
The data used for these tasks is from Twitter users who state a diagnosis of depression or post traumatic stress disorder (PTSD) and demographically-matched community controls. The unshared task was a hackathon held at Johns Hopkins University in November 2014 to explore the data, and the shared task was conducted remotely, with each participating team submitted scores for a held-back test set of users. The shared task consisted of three binary classification experiments: (1) depression versus control, (2) PTSD versus control, and (3) depression versus PTSD. Classifiers were compared primarily via their average precision, though a number of other metrics are used along with this to allow a more nuanced interpretation of the performance measures.", "title": "" }, { "docid": "cf369f232ba023e675f322f42a20b2c2", "text": "Ring topology local area networks (LAN’s) using the “buffer insertion” access method have as yet received relatively little attention. In this paper we present details of a LAN of this.-, called SILK-system for integrated local communication (in German, “Kommunikation”). Sections of the paper describe the synchronous transmission technique of the ring channel, the time-multiplexed access of eight ports at each node, the “braided” interconnection for bypassing defective nodes, and the role of interface transformation units and user interfaces, as well as some traffic,characteristics and reliability aspects. SILK’S modularity and open system concept are demonstrated by the already implemented applications such as distributed text editing, local telephone or teletex exchange, and process control in a TV studio.", "title": "" }, { "docid": "926db14af35f9682c28a64e855fb76e5", "text": "This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.", "title": "" }, { "docid": "f3c76c415aa4555f3f9d4c347d3c5e87", "text": "Virtual worlds, set-up on the Internet, occur as a highly complex form of visual media. They foreshadow future developments, not only in leisure settings, but also in health care and business environments. The interaction between real-life and virtual worlds, i.e., inter-reality, has recently moved to the center of scientific interest (Bainbridge 2007). Particularly, the empirical assessment of the value of virtual embodiment and its outcomes is needed (Schultze 2010). Here, this paper aims to make a contribution. Reviewing prior media theories and corresponding conceptualizations such as presence, immersion, media literacy and emotions, we argue that in inter-reality, individual differences in perceiving and dealing with one’s own and other’s emotions influence an individual's performance. 
Providing construct operationalizations and model propositions, we suggest testing the theory in the context of competitive and socially interactive virtual worlds.", "title": "" }, { "docid": "e5b125bdb5a17cbe926c03c3bac6935c", "text": "We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition we require that the distribution of features extracted from images in the two domains are indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between MNIST, USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in classification tasks, and also between GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state of the art performance on each of these datasets.", "title": "" }, { "docid": "e0d553cc4ca27ce67116c62c49c53d23", "text": "We estimate a vehicle's speed, its wheelbase length, and tire track length by jointly estimating its acoustic wave pattern with a single passive acoustic sensor that records the vehicle's drive-by noise. The acoustic wave pattern is determined using the vehicle's speed, the Doppler shift factor, the sensor's distance to the vehicle's closest-point-of-approach, and three envelope shape (ES) components, which approximate the shape variations of the received signal's power envelope. We incorporate the parameters of the ES components along with estimates of the vehicle engine RPM, the number of cylinders, and the vehicle's initial bearing, loudness and speed to form a vehicle profile vector. This vector provides a fingerprint that can be used for vehicle identification and classification. We also provide possible reasons why some of the existing methods are unable to provide unbiased vehicle speed estimates using the same framework. The approach is illustrated using vehicle speed estimation and classification results obtained with field data.", "title": "" }, { "docid": "af2afb32b243af0706dd641324d63dc0", "text": "We present a qualitative evaluation of a number of free publicly available physics engines for simulation systems and game development. A brief overview of the aspects of a physics engine is presented accompanied by a comparison of the capabilities of each physics engine. Aspects that are investigated the accuracy and computational efficiency of the integrator properties, material properties, stacks, links, and collision detection system.", "title": "" }, { "docid": "2adde1812974f2d5d35d4c7e31ca7247", "text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. 
We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently \"fail-open,\" meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned.", "title": "" }, { "docid": "3dbafd997eeb5985df0f90a65ea17c9f", "text": "This paper reviews the extended Cauchy model and the four-parameter model for describing the wavelength and temperature effects of liquid crystal (LC) refractive indices. The refractive indices of nine commercial LCs, MLC-9200-000, MLC-9200-100, MLC-6608, MLC-6241-000, 5PCH, 5CB, TL-216, E7, and E44 are measured by the Multi-wavelength Abbe Refractometer. These experimental data are used to validate the theoretical models. Excellent agreement between experiment and theory is obtained.", "title": "" }, { "docid": "a357ce62099cd5b12c09c688c5b9736e", "text": "Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them.", "title": "" }, { "docid": "2164fbc381033f7be87d075440053c0e", "text": "Recently there has been a surge of interest in neural architectures for complex structured learning tasks. Along this track, we are addressing the supervised task of relation extraction and named-entity recognition via recursive neural structures and deep unsupervised feature learning. Our models are inspired by several recent works in deep learning for natural language. We have extended the previous models, and evaluated them in various scenarios, for relation extraction and namedentity recognition. In the models, we avoid using any external features, so as to investigate the power of representation instead of feature engineering. We implement the models and proposed some more general models for future work. We will briefly review previous works on deep learning and give a brief overview of recent progresses relation extraction and named-entity recognition.", "title": "" }, { "docid": "830a585529981bd5b61ac5af3055d933", "text": "Automatic retinal image analysis is emerging as an important screening tool for early detection of eye diseases.
Glaucoma is one of the most common causes of blindness. The manual examination of optic disk (OD) is a standard procedure used for detecting glaucoma. In this paper, we present an automatic OD parameterization technique based on segmented OD and cup regions obtained from monocular retinal images. A novel OD segmentation method is proposed which integrates the local image information around each point of interest in multidimensional feature space to provide robustness against variations found in and around the OD region. We also propose a novel cup segmentation method which is based on anatomical evidence such as vessel bends at the cup boundary, considered relevant by glaucoma experts. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A multi-stage strategy is employed to derive a reliable subset of vessel bends called r-bends followed by a local spline fitting to derive the desired cup boundary. The method has been evaluated on 138 images comprising 33 normal and 105 glaucomatous images against three glaucoma experts. The obtained segmentation results show consistency in handling various geometric and photometric variations found across the dataset. The estimation error of the method for vertical cup-to-disk diameter ratio is 0.09/0.08 (mean/standard deviation) while for cup-to-disk area ratio it is 0.12/0.10. Overall, the obtained qualitative and quantitative results show effectiveness in both segmentation and subsequent OD parameterization for glaucoma assessment.", "title": "" }, { "docid": "f981f9a15062f4187dfa7ac71f19d54a", "text": "Background\nSoccer is one of the most widely played sports in the world. However, soccer players have an increased risk of lower limb injury. These injuries may be caused by both modifiable and non-modifiable factors, justifying the adoption of an injury prevention program such as the Fédération Internationale de Football Association (FIFA) 11+. The purpose of this study was to evaluate the efficacy of the FIFA 11+ injury prevention program for soccer players.\n\n\nMethodology\nThis meta-analysis was based on the PRISMA 2015 protocol. A search using the keywords \"FIFA,\" \"injury prevention,\" and \"football\" found 183 articles in the PubMed, MEDLINE, LILACS, SciELO, and ScienceDirect databases. Of these, 6 studies were selected, all of which were randomized clinical trials.\n\n\nResults\nThe sample consisted of 6,344 players, comprising 3,307 (52%) in the intervention group and 3,037 (48%) in the control group. The FIFA 11+ program reduced injuries in soccer players by 30%, with an estimated relative risk of 0.70 (95% confidence interval, 0.52-0.93, p = 0.01). In the intervention group, 779 (24%) players had injuries, while in the control group, 1,219 (40%) players had injuries. However, this pattern was not homogeneous throughout the studies because of clinical and methodological differences in the samples. This study showed no publication bias.\n\n\nConclusion\nThe FIFA 11+ warm-up program reduced the risk of injury in soccer players by 30%.", "title": "" } ]
scidocsrr
ac9a1c1150e7f8f072e93893dc1c6401
Big Data Methods for Computational Linguistics
[ { "docid": "889dd22fcead3ce546e760bda8ef4980", "text": "We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine effectiveness of our approach via multiple evaluations and demonstrate 12% error reduction in precision over a state-of-the-art weakly supervised baseline.", "title": "" }, { "docid": "2e288b78b50cd771f4c918794c3e9046", "text": "Traditional approaches to Relation Extraction from text require manually defining the relations to be extracted. We propose here an approach to automatically discovering relevant relations, given a large text corpus plus an initial ontology defining hundreds of noun categories (e.g., Athlete, Musician, Instrument). Our approach discovers frequently stated relations between pairs of these categories, using a two step process. For each pair of categories (e.g., Musician and Instrument) it first coclusters the text contexts that connect known instances of the two categories, generating a candidate relation for each resulting cluster. It then applies a trained classifier to determine which of these candidate relations is semantically valid. Our experiments apply this to a text corpus containing approximately 200 million web pages and an ontology containing 122 categories from the NELL system [Carlson et al., 2010b], producing a set of 781 proposed candidate relations, approximately half of which are semantically valid. We conclude this is a useful approach to semi-automatic extension of the ontology for large-scale information extraction systems such as NELL.", "title": "" } ]
[ { "docid": "32ae0b0c5b3ca3a7ede687872d631d29", "text": "Background—The benefit of catheter-based reperfusion for acute myocardial infarction (MI) is limited by a 5% to 15% incidence of in-hospital major ischemic events, usually caused by infarct artery reocclusion, and a 20% to 40% need for repeat percutaneous or surgical revascularization. Platelets play a key role in the process of early infarct artery reocclusion, but inhibition of aggregation via the glycoprotein IIb/IIIa receptor has not been prospectively evaluated in the setting of acute MI. Methods and Results —Patients with acute MI of,12 hours’ duration were randomized, on a double-blind basis, to placebo or abciximab if they were deemed candidates for primary PTCA. The primary efficacy end point was death, reinfarction, or any (urgent or elective) target vessel revascularization (TVR) at 6 months by intention-to-treat (ITT) analysis. Other key prespecified end points were early (7 and 30 days) death, reinfarction, or urgent TVR. The baseline clinical and angiographic variables of the 483 (242 placebo and 241 abciximab) patients were balanced. There was no difference in the incidence of the primary 6-month end point (ITT analysis) in the 2 groups (28.1% and 28.2%, P50.97, of the placebo and abciximab patients, respectively). However, abciximab significantly reduced the incidence of death, reinfarction, or urgent TVR at all time points assessed (9.9% versus 3.3%, P50.003, at 7 days; 11.2% versus 5.8%, P50.03, at 30 days; and 17.8% versus 11.6%, P50.05, at 6 months). Analysis by actual treatment with PTCA and study drug demonstrated a considerable effect of abciximab with respect to death or reinfarction: 4.7% versus 1.4%, P50.047, at 7 days; 5.8% versus 3.2%, P50.20, at 30 days; and 12.0% versus 6.9%, P50.07, at 6 months. The need for unplanned, “bail-out” stenting was reduced by 42% in the abciximab group (20.4% versus 11.9%, P50.008). Major bleeding occurred significantly more frequently in the abciximab group (16.6% versus 9.5%, P 0.02), mostly at the arterial access site. There was no intracranial hemorrhage in either group. Conclusions—Aggressive platelet inhibition with abciximab during primary PTCA for acute MI yielded a substantial reduction in the acute (30-day) phase for death, reinfarction, and urgent target vessel revascularization. However, the bleeding rates were excessive, and the 6-month primary end point, which included elective revascularization, was not favorably affected.(Circulation. 1998;98:734-741.)", "title": "" }, { "docid": "19d9b6407ee0b4c66a580ec8e2de3ece", "text": "Over the years, indoor scene parsing has attracted a growing interest in the computer vision community. Existing methods have typically focused on diverse subtasks of this challenging problem. In particular, while some of them aim at segmenting the image into regions, such as object or surface instances, others aim at inferring the semantic labels of given regions, or their support relationships. These different tasks are typically treated as separate ones. However, they bear strong connections: good regions should respect the semantic labels, support can only be defined for meaningful regions, support relationships strongly depend on semantics. In this paper, we therefore introduce an approach to jointly segment the instances and infer their semantic labels and support relationships from a single input image. 
By exploiting a hierarchical segmentation, we formulate our problem as that of jointly finding the regions in the hierarchy that correspond to instances and estimating their class labels and pairwise support relationships. We express this via a Markov Random Field, which allows us to further encode links between the different types of variables. Inference in this model can be done exactly via integer linear programming, and we learn its parameters in a structural SVM framework. Our experiments on NYUv2 demonstrate the benefits of reasoning jointly about all these subtasks of indoor scene parsing.", "title": "" }, { "docid": "4f6ce186679f9ab4f0aaada92ccf5a84", "text": "Sensor networks have a significant potential in diverse applications some of which are already beginning to be deployed in areas such as environmental monitoring. As the application logic becomes more complex, programming difficulties are becoming a barrier to adoption of these networks. The difficulty in programming sensor networks is not only due to their inherently distributed nature but also the need for mechanisms to address their harsh operating conditions such as unreliable communications, faulty nodes, and extremely constrained resources. Researchers have proposed different programming models to overcome these difficulties with the ultimate goal of making programming easy while making full use of available resources. In this article, we first explore the requirements for programming models for sensor networks. Then we present a taxonomy of the programming models, classified according to the level of abstractions they provide. We present an evaluation of various programming models for their responsiveness to the requirements. Our results point to promising efforts in the area and a discussion of the future directions of research in this area.", "title": "" }, { "docid": "c04cf54a40cd84961657bf50153ff68b", "text": "Neural IR models, such as DRMM and PACRR, have achieved strong results by successfully capturing relevance matching signals. We argue that the context of these matching signals is also important. Intuitively, when extracting, modeling, and combining matching signals, one would like to consider the surrounding text(local context) as well as other signals from the same document that can contribute to the overall relevance score. In this work, we highlight three potential shortcomings caused by not considering context information and propose three neural ingredients to address them: a disambiguation component, cascade k-max pooling, and a shuffling combination layer. Incorporating these components into the PACRR model yields Co-PACER, a novel context-aware neural IR model. Extensive comparisons with established models on TREC Web Track data confirm that the proposed model can achieve superior search results. In addition, an ablation analysis is conducted to gain insights into the impact of and interactions between different components. We release our code to enable future comparisons.", "title": "" }, { "docid": "428fea9d583921320c0377b483b1280e", "text": "Purpose: The purpose of this paper is to perform a systematic review of articles that have used the unified theory of acceptance and use of technology (UTAUT). Design/methodology/approach: The results produced in this research are based on the literature analysis of 174 existing articles on the UTAUT model. 
This has been performed by collecting data including demographic details, methodological details, limitations, and significance of relationships between the constructs from the available articles based on the UTAUT. Findings: The findings were categorised by dividing the articles that used the UTAUT model into types of information systems used, research approach and methods employed, and tools and techniques implemented to analyse results. We also perform the weight analysis of variables and found that performance expectancy and behavioural intention qualified for the best predictor category. The research also analysed and presented the limitations of existing studies. Research limitations/implications: The search activities were centered on occurrences of keywords to avoid tracing a large number of publications where these keywords might have been used as casual words in the main text. However, we acknowledge that there may be a number of studies, which lack keywords in the title, but still focus upon UTAUT in some form. Originality/value: This is the first research of its type, which has extensively examined the literature on the UTAUT and provided the researchers with the accumulative knowledge about the model.", "title": "" }, { "docid": "9ded4056779fd98d41d7863015fa78bd", "text": "Preliminary notes Pneumatic artificial muscles belong to the group of nonconventional actuators intended for various biomedical or industrial applications. The one-DOF actuator (DOF – degree of freedom) consists of two muscles acting in opposition to each other (antagonistic connection) with simplified control scheme (stiffness control loop is excluded in order to simplify the control and to achieve as high stiffness as possible). The actuator exhibits nonlinear behaviour attributable to the compliant nature of pneumatic muscles. The muscle model is based on modified two-element muscle model consisting of a variable damper and a variable spring connected in parallel. From kinematic point of view, it is a simple mechanism with one degree of freedom (arm with added mass rotating around one revolute axis) with plane of movement being parallel to the ground. The main purpose of this model is a control design, so the stringency of accuracy criteria is lowered compared to a truth model. The model is validated using the results of dynamic experiments with real plant in laboratory conditions.", "title": "" }, { "docid": "de5c439731485929416b0e729f7f79b2", "text": "The feedback dynamics from mosquito to human and back to mosquito involve considerable time delays due to the incubation periods of the parasites. In this paper, taking explicit account of the incubation periods of parasites within the human and the mosquito, we first propose a delayed Ross-Macdonald model. Then we calculate the basic reproduction number R0 and carry out some sensitivity analysis of R0 on the incubation periods, that is, to study the effect of time delays on the basic reproduction number. It is shown that the basic reproduction number is a decreasing function of both time delays. Thus, prolonging the incubation periods in either humans or mosquitos (via medicine or control measures) could reduce the prevalence of infection.", "title": "" }, { "docid": "005c254394245c4b31812616b369d637", "text": "Traditional methods on aircraft detection in remote sensing images rely on handcrafted design, which is difficult to detect and recognize the target in complex scenes and multiscale conditions. 
In this paper, we tackle these two problems by proposing a method for aircraft detection based on the fully convolutional neural network(FCNN). FCNN can obtain the location of the aircraft quickly and directly by minimizing a multi-task loss. Through data augmentation and transfer learning, the classification accuracy of FCN is much improved. In order to recognize small targets, we combine the resolution information with priori knowledge of aircraft to construct image pyramid structure on the test images. The experimental results show that the method with less parameters has higher accuracy and the model is simple to train.", "title": "" }, { "docid": "ddd1e06761a476dc02397a4381fbe8f8", "text": "The potential for physical activity and fitness to improve cognitive function, learning and academic achievement in children has received attention by researchers and policy makers. This paper reports a systematic approach to identification, analysis and review of published studies up to early 2009. A threestep search method was adopted to identify studies that used measures of physical activity or fitness to assess either degree of association with or effect on a) academic achievement and b) cognitive performance. A total of 18 studies including one randomised control trial, six quasi-experimental and 11 correlational studies were included for data extraction. No studies meeting criteria that examined the links between physical activity and cognitive function were found. Weak positive associations were found between both physical activity and fitness and academic achievement and fitness and elements of cognitive function, but this was not supported by intervention studies. There is insufficient evidence to conclude that additional physical education time increases academic achievement; however there is no evidence that it is detrimental. The quality and depth of the evidence base is limited. Further research with rigour beyond correlational studies is essential.", "title": "" }, { "docid": "9444dbd49ea83a703371326795001d09", "text": "Sequence clustering is a technique of bioinformatics that is used to discover the properties of sequences by grouping them into clusters and assigning each sequence to one of those clusters. In business process mining, the goal is also to extract sequence behaviour from an event log but the problem is often simplified by assuming that each event is already known to belong to a given process and process instance. In this paper, we describe two experiments where this information is not available. One is based on a real-world case study of observing a software development team for three weeks. The other is based on simulation and shows that it is possible to recover the original behaviour in a fully automated way. In both experiments, sequence clustering plays a central role.", "title": "" }, { "docid": "2fdcfab59f54410627ed13c2e46689cd", "text": "The field of software visualization (SV) investigates approaches and techniques for static and dynamic graphical representations of algorithms, programs (code), and processed data. SV is concerned primarily with the analysis of programs and their development. The goal is to improve our understanding of inherently invisible and intangible software, particularly when dealing with large information spaces that characterize domains like software maintenance, reverse engineering, and collaborative development. 
The main challenge is to find effective mappings from different software aspects to graphical representations using visual metaphors. This paper provides an overview of the SV research, describes current research directions, and includes an extensive list of recommended readings.", "title": "" }, { "docid": "d88067f2dbcd55dae083134b5eeb7868", "text": "Current state-of-the-art human activity recognition is fo cused on the classification of temporally trimmed videos in which only one action occurs per frame. We propose a simple, yet effective, method for the temporal detection of activities in temporally untrimmed videos with the help of untrimmed classification. Firstly, our model predicts th e top k labels for each untrimmed video by analysing global video-level features. Secondly, frame-level binary class ification is combined with dynamic programming to generate the temporally trimmed activity proposals . Finally, each proposal is assigned a label based on the global label, and scored with the score of the temporal activity proposal and the global score. Ultimately, we show that untrimmed video classification models can be used as stepping stone for temporal detection.", "title": "" }, { "docid": "fee574207e3985ea3c697f831069fa8b", "text": "This paper focuses on the utilization of wireless networkin g in the robotics domain. Many researchers have already equipped their robot s with wireless communication capabilities, stimulated by the observation that multi-robot systems tend to have several advantages over their single-robot counterpa r s. Typically, this integration of wireless communication is tackled in a quite pragmat ic manner, only a few authors presented novel Robotic Ad Hoc Network (RANET) prot oc ls that were designed specifically with robotic use cases in mind. This is in harp contrast with the domain of vehicular ad hoc networks (VANET). This observati on is the starting point of this paper. If the results of previous efforts focusing on VANET protocols could be reused in the RANET domain, this could lead to rapid progre ss in the field of networked robots. To investigate this possibility, this paper rovides a thorough overview of the related work in the domain of robotic and vehicular ad h oc networks. Based on this information, an exhaustive list of requirements is d efined for both types. It is concluded that the most significant difference lies in the fact that VANET protocols are oriented towards low throughput messaging, while R ANET protocols have to support high throughput media streaming as well. Althoug h not always with equal importance, all other defined requirements are valid for bot h protocols. This leads to the conclusion that cross-fertilization between them is an appealing approach for future RANET research. To support such developments, this pap er concludes with the definition of an appropriate working plan.", "title": "" }, { "docid": "cc980260540d9e9ae8e7219ff9424762", "text": "The persuasive design of e-commerce websites has been shown to support people with online purchases. Therefore, it is important to understand how persuasive applications are used and assimilated into e-commerce website designs. This paper demonstrates how the PSD model’s persuasive features could be used to build a bridge supporting the extraction and evaluation of persuasive features in such e-commerce websites; thus practically explaining how feature implementation can enhance website persuasiveness. 
To support a deeper understanding of persuasive e-commerce website design, this research, using the Persuasive Systems Design (PSD) model, identifies the distinct persuasive features currently assimilated in ten successful e-commerce websites. The results revealed extensive use of persuasive features; particularly features related to dialogue support, credibility support, and primary task support; thus highlighting weaknesses in the implementation of social support features. In conclusion we suggest possible ways for enhancing persuasive feature implementation via appropriate contextual examples and explanation.", "title": "" }, { "docid": "4cf03c95a9d938ca76266154f43d8660", "text": "Fitts' law, a one-dimensional model of human movement, is commonly applied to two-dimensional target acquisition tasks on interactive computing systems. For rectangular targets, such as words, it is demonstrated that the model can break down and yield unrealistically low (even negative!) ratings for a task's index of difficulty (ID). The Shannon formulation is shown to partially correct this problem, since ID is always ≥ 0 bits. As well, two alternative interpretations “target width” are introduced that accommodate the two-dimensional nature of tasks. Results of an experiment are presented that show a significant improvement in the model's performance using the suggested changes.", "title": "" }, { "docid": "5db42e1ef0e0cf3d4c1c3b76c9eec6d2", "text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.", "title": "" }, { "docid": "b24fc322e0fec700ec0e647c31cfd74d", "text": "Organometal trihalide perovskite solar cells offer the promise of a low-cost easily manufacturable solar technology, compatible with large-scale low-temperature solution processing. Within 1 year of development, solar-to-electric power-conversion efficiencies have risen to over 15%, and further imminent improvements are expected. Here we show that this technology can be successfully made compatible with electron acceptor and donor materials generally used in organic photovoltaics. We demonstrate that a single thin film of the low-temperature solution-processed organometal trihalide perovskite absorber CH3NH3PbI3-xClx, sandwiched between organic contacts can exhibit devices with power-conversion efficiency of up to 10% on glass substrates and over 6% on flexible polymer substrates. 
This work represents an important step forward, as it removes most barriers to adoption of the perovskite technology by the organic photovoltaic community, and can thus utilize the extensive existing knowledge of hybrid interfaces for further device improvements and flexible processing platforms.", "title": "" }, { "docid": "daf751e821c730db906c40ccf4678a90", "text": "Data provided by Internet of Things (IoT) are time series and have some specific characteristics that must be considered with regard to storage and management. IoT data is very likely to be stored in NoSQL system databases where there are some particular engine and compaction strategies to manage time series data. In this article, two of these strategies found in the open source Cassandra database system are described, analyzed and compared. The configuration of these strategies is not trivial and may be very time consuming. To provide indicators, the strategy with the best time performance had its main parameter tested along 14 different values and results are shown, related to both response time and storage space needed. The results may help users to configure their IoT NoSQL databases in an efficient setup, may help designers to improve database compaction strategies or encourage the community to set new default values for the compaction strategies.", "title": "" }, { "docid": "3beddb909ff11d8e9399060bee4dfebf", "text": "Cloud technology elevates the potential of robotics with which robots possessing various capabilities and resources may share data and combine new skills through cooperation. With multiple robots, a cloud robotic system enables intensive and complicated tasks to be carried out in an optimal and cooperative manner. Multisensor data retrieval (MSDR) is one of the key fundamental tasks to share the resources. Having attracted wide attention, MSDR is facing severe technical challenges. For example, MSDR is particularly difficult when cloud cluster hosts accommodate unpredictable data requests triggered by multiple robots operating in parallel. In these cases, near real-time responses are essential while addressing the problem of the synchronization of multisensor data simultaneously. In this paper, we present a framework targeting near real-time MSDR, which grants asynchronous access to the cloud from the robots. We propose a market-based management strategy for efficient data retrieval. It is validated by assessing several quality-of-service (QoS) criteria, with emphasis on facilitating data retrieval in near real-time. Experimental results indicate that the MSDR framework is able to achieve excellent performance under the proposed management strategy in typical cloud robotic scenarios.", "title": "" }, { "docid": "4029cc928eca08b08e2e43b144291423", "text": "We propose a method to aggregate and organize a large, multi-source dataset of news articles into a collection of major stories, and automatically name and visualize these stories in a working system. The approach is able to run online, as new articles are added, processing 4 million news articles from 20 news sources, and extracting 80000 major stories, some of which span several years. The visual interface consists of lanes of timelines, each annotated with information that is deemed important for the story, including extracted quotations. The working system allows a user to search and navigate 8 years of story information.", "title": "" } ]
scidocsrr