query_id (string, length 32) | query (string, length 5–5.38k) | positive_passages (list, length 1–23) | negative_passages (list, length 4–100) | subset (string, 7 classes) |
---|---|---|---|---|
51f87aa79aabc176871d505e427c0ded | Intent Classification of Short-Text on Social Media | [
{
"docid": "ac46e6176377612544bb74c064feed67",
"text": "The existence and use of standard test collections in information retrieval experimentation allows results to be compared between research groups and over time. Such comparisons, however, are rarely made. Most researchers only report results from their own experiments, a practice that allows lack of overall improvement to go unnoticed. In this paper, we analyze results achieved on the TREC Ad-Hoc, Web, Terabyte, and Robust collections as reported in SIGIR (1998–2008) and CIKM (2004–2008). Dozens of individual published experiments report effectiveness improvements, and often claim statistical significance. However, there is little evidence of improvement in ad-hoc retrieval technology over the past decade. Baselines are generally weak, often being below the median original TREC system. And in only a handful of experiments is the score of the best TREC automatic run exceeded. Given this finding, we question the value of achieving even a statistically significant result over a weak baseline. We propose that the community adopt a practice of regular longitudinal comparison to ensure measurable progress, or at least prevent the lack of it from going unnoticed. We describe an online database of retrieval runs that facilitates such a practice.",
"title": ""
},
{
"docid": "7ec88ea12923d546416fbc6a72e6ff5d",
"text": "This paper proposes to study the problem of identifying intention posts in online discussion forums. For example, in a discussion forum, a user wrote “I plan to buy a camera,” which indicates a buying intention. This intention can be easily exploited by advertisers. To the best of our knowledge, there is still no reported study of this problem. Our research found that this problem is particularly suited to transfer learning because in different domains, people express the same intention in similar ways. We then propose a new transfer learning method which, unlike a general transfer learning algorithm, exploits several special characteristics of the problem. Experimental results show that the proposed method outperforms several strong baselines, including supervised learning in the target domain and a recent transfer learning method.",
"title": ""
},
{
"docid": "3b8f29b9cc930200e079df0ea2f30f68",
"text": "In this paper, we propose to study the problem of identifying and classifying tweets into intent categories. For example, a tweet “I wanna buy a new car” indicates the user’s intent for buying a car. Identifying such intent tweets will have great commercial value among others. In particular, it is important that we can distinguish different types of intent tweets. We propose to classify intent tweets into six categories, namely Food & Drink, Travel, Career & Education, Goods & Services, Event & Activities and Trifle. We propose a semi-supervised learning approach to categorizing intent tweets into the six categories. We construct a test collection by using a bootstrap method. Our experimental results show that our approach is effective in inferring intent categories for tweets.",
"title": ""
},
{
"docid": "841f2ab48d111a6b70b2a3171c155f44",
"text": "In this paper we present SPADE, a new algorithm for fast discovery of Sequential Patterns. The existing solutions to this problem make repeated database scans, and use complex hash structures which have poor locality. SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems, that can be independently solved in main-memory using efficient lattice search techniques, and using simple join operations. All sequences are discovered in only three database scans. Experiments show that SPADE outperforms the best previous algorithm by a factor of two, and by an order of magnitude with some pre-processed data. It also has linear scalability with respect to the number of input-sequences, and a number of other database parameters. Finally, we discuss how the results of sequence mining can be applied in a real application domain.",
"title": ""
}
] | [
{
"docid": "441633276271b94dc1bd3e5e28a1014d",
"text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.",
"title": ""
},
{
"docid": "d2f5f5b42d732a5d27310e4f2d76116a",
"text": "This paper reports on a cluster analysis of pervasive games through a bottom-up approach based upon 120 game examples. The basis for the clustering algorithm relies on the identification of pervasive gameplay design patterns for each game from a set of 75 possible patterns. The resulting hierarchy presents a view of the design space of pervasive games, and details of clusters and novel gameplay features are described. The paper concludes with a view over how the clusters relate to existing genres and models of pervasive games.",
"title": ""
},
{
"docid": "91726dd6fb83be434766a05bdaba7a7a",
"text": "Bargaining with reading habit is no need. Reading is not kind of something sold that you can take or not. It is a thing that will change your life to life better. It is the thing that will give you many things around the world and this universe, in the real world and here after. As what will be given by this practical digital libraries books bytes and bucks, how can you bargain with the thing that has many benefits for you?",
"title": ""
},
{
"docid": "c2aed51127b8753e4b71da3b331527cd",
"text": "In this paper, we present the theory and design of interval type-2 fuzzy logic systems (FLSs). We propose an efficient and simplified method to compute the input and antecedent operations for interval type-2 FLSs; one that is based on a general inference formula for them. We introduce the concept of upper and lower membership functions (MFs) and illustrate our efficient inference method for the case of Gaussian primary MFs. We also propose a method for designing an interval type-2 FLS in which we tune its parameters. Finally, we design type-2 FLSs to perform time-series forecasting when a nonstationary time-series is corrupted by additive noise where SNR is uncertain and demonstrate improved performance over type-1 FLSs.",
"title": ""
},
{
"docid": "b6160256dd6877fea4cec96b74ebc03a",
"text": "A cascaded long short-term memory (LSTM) architecture with discriminant feature learning is proposed for the task of question answering on real world images. The proposed LSTM architecture jointly learns visual features and parts of speech (POS) tags of question words or tokens. Also, dimensionality of deep visual features is reduced by applying Principal Component Analysis (PCA) technique. In this manner, the proposed question answering model captures the generic pattern of question for a given context of image which is just not constricted within the training dataset. Empirical outcome shows that this kind of approach significantly improves the accuracy. It is believed that this kind of generic learning is a step towards a real-world visual question answering (VQA) system which will perform well for all possible forms of open-ended natural language queries.",
"title": ""
},
{
"docid": "f2d2979ca63d47ba33fffb89c16b9499",
"text": "Shor and Grover demonstrated that a quantum computer can outperform any classical computer in factoring numbers and in searching a database by exploiting the parallelism of quantum mechanics. Whereas Shor's algorithm requires both superposition and entanglement of a many-particle system, the superposition of single-particle quantum states is sufficient for Grover's algorithm. Recently, the latter has been successfully implemented using Rydberg atoms. Here we propose an implementation of Grover's algorithm that uses molecular magnets, which are solid-state systems with a large spin; their spin eigenstates make them natural candidates for single-particle systems. We show theoretically that molecular magnets can be used to build dense and efficient memory devices based on the Grover algorithm. In particular, one single crystal can serve as a storage unit of a dynamic random access memory device. Fast electron spin resonance pulses can be used to decode and read out stored numbers of up to 105, with access times as short as 10-10 seconds. We show that our proposal should be feasible using the molecular magnets Fe8 and Mn12.",
"title": ""
},
{
"docid": "a55eed627afaf39ee308cc9e0e10a698",
"text": "Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first person) perspective or the (third person) perspective of their mothers in response to situations involving social emotions or to neutral situations. The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar, the somatosensory cortex, and the right inferior parietal lobe are crucial in the process of self/ other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy.",
"title": ""
},
{
"docid": "b939227b7de6ef57c2d236fcb01b7bfc",
"text": "We propose a speed estimation method with human body accelerations measured on the chest by a tri-axial accelerometer. To estimate the speed we segmented the acceleration signal into strides measuring stride time, and applied two neural networks into the patterns parameterized from each stride calculating stride length. The first neural network determines whether the subject walks or runs, and the second neural network with different node interactions according to the subject's status estimates stride length. Walking or running speed is calculated with the estimated stride length divided by the measured stride time. The neural networks were trained by patterns obtained from 15 subjects and then validated by 2 untrained subjects' patterns. The result shows good agreement between actual and estimated speeds presenting the linear correlation coefficient r = 0.9874. We also applied the method to the real field and track data.",
"title": ""
},
{
"docid": "5ceb6e39c8f826c0a7fd0e5086090a5f",
"text": "Mobile botnet phenomenon is gaining popularity among malware writers in order to exploit vulnerabilities in smartphones. In particular, mobile botnets enable illegal access to a victim’s smartphone, can compromise critical user data and launch a DDoS attack through Command and Control (C&C). In this article, we propose a static analysis approach, DeDroid, to investigate botnet-specific properties that can be used to detect mobile applications with botnet intensions. Initially, we identify critical features by observing code behavior of the few known malware binaries having C&C features. Then, we compare the identified features with the malicious and benign applications of Drebin dataset. The results show against the comparative analysis that, Drebin dataset has 35% malicious applications which qualify as botnets. Upon closer examination, 90% of the potential botnets are confirmed as botnets. Similarly, for comparative analysis against benign applications having C&C features, DeDroid has achieved adequate detection accuracy. In addition, DeDroid has achieved high accuracy with negligible false positive rate while making decision for state-of-the-art malicious applications.",
"title": ""
},
{
"docid": "550ac6565bf42f42ec35d63f8c3b1e01",
"text": "A fully planar ultrawideband phased array with wide scan and low cross-polarization performance is introduced. The array is based on Munk's implementation of the current sheet concept, but it employs a novel feeding scheme for the tightly coupled horizontal dipoles that enables simple PCB fabrication. This feeding eliminates the need for “cable organizers” and external baluns, and when combined with dual-offset dual-polarized lattice arrangements the array can be implemented in a modular, tile-based fashion. Simple physical explanations and circuit models are derived to explain the array's operation and guide the design process. The theory and insights are subsequently used to design an exemplary dual-polarized infinite array with 5:1 bandwidth and VSWR <; 2.1 at broadside, and cross-polarization ≈ -15 dB out to θ = 45° in the D- plane.",
"title": ""
},
{
"docid": "04c0a4613ab0ec7fd77ac5216a17bd1d",
"text": "Many contemporary biomedical applications such as physiological monitoring, imaging, and sequencing produce large amounts of data that require new data processing and visualization algorithms. Algorithms such as principal component analysis (PCA), singular value decomposition and random projections (RP) have been proposed for dimensionality reduction. In this paper we propose a new random projection version of the fuzzy c-means (FCM) clustering algorithm denoted as RPFCM that has a different ensemble aggregation strategy than the one previously proposed, denoted as ensemble FCM (EFCM). RPFCM is more suitable than EFCM for big data sets (large number of points, n). We evaluate our method and compare it to EFCM on synthetic and real datasets.",
"title": ""
},
{
"docid": "69d826aa8309678cf04e2870c23a99dd",
"text": "Contemporary analyses of cell metabolism have called out three metabolites: ATP, NADH, and acetyl-CoA, as sentinel molecules whose accumulation represent much of the purpose of the catabolic arms of metabolism and then drive many anabolic pathways. Such analyses largely leave out how and why ATP, NADH, and acetyl-CoA (Figure 1 ) at the molecular level play such central roles. Yet, without those insights into why cells accumulate them and how the enabling properties of these key metabolites power much of cell metabolism, the underlying molecular logic remains mysterious. Four other metabolites, S-adenosylmethionine, carbamoyl phosphate, UDP-glucose, and Δ2-isopentenyl-PP play similar roles in using group transfer chemistry to drive otherwise unfavorable biosynthetic equilibria. This review provides the underlying chemical logic to remind how these seven key molecules function as mobile packets of cellular currencies for phosphoryl transfers (ATP), acyl transfers (acetyl-CoA, carbamoyl-P), methyl transfers (SAM), prenyl transfers (IPP), glucosyl transfers (UDP-glucose), and electron and ADP-ribosyl transfers (NAD(P)H/NAD(P)+) to drive metabolic transformations in and across most primary pathways. The eighth key metabolite is molecular oxygen (O2), thermodynamically activated for reduction by one electron path, leaving it kinetically stable to the vast majority of organic cellular metabolites.",
"title": ""
},
{
"docid": "df35b679204e0729266a1076685600a1",
"text": "A new innovations state space modeling framework, incorporating Box-Cox transformations, Fourier series with time varying coefficients and ARMA error correction, is introduced for forecasting complex seasonal time series that cannot be handled using existing forecasting models. Such complex time series include time series with multiple seasonal periods, high frequency seasonality, non-integer seasonality and dual-calendar effects. Our new modelling framework provides an alternative to existing exponential smoothing models, and is shown to have many advantages. The methods for initialization and estimation, including likelihood evaluation, are presented, and analytical expressions for point forecasts and interval predictions under the assumption of Gaussian errors are derived, leading to a simple, comprehensible approach to forecasting complex seasonal time series. Our trigonometric formulation is also presented as a means of decomposing complex seasonal time series, which cannot be decomposed using any of the existing decomposition methods. The approach is useful in a broad range of applications, and we illustrate its versatility in three empirical studies where it demonstrates excellent forecasting performance over a range of prediction horizons. In addition, we show that our trigonometric decomposition leads to the identification and extraction of seasonal components, which are otherwise not apparent in the time series plot itself.",
"title": ""
},
{
"docid": "89875f4c0d70e655dd1ff9ffef7c04c2",
"text": "Flexible electronics incorporate all the functional attributes of conventional rigid electronics in formats that have been altered to survive mechanical deformations. Understanding the evolution of device performance during bending, stretching, or other mechanical cycling is, therefore, fundamental to research efforts in this area. Here, we review the various classes of flexible electronic devices (including power sources, sensors, circuits and individual components) and describe the basic principles of device mechanics. We then review techniques to characterize the deformation tolerance and durability of these flexible devices, and we catalogue and geometric designs that are intended to optimize electronic systems for maximum flexibility.",
"title": ""
},
{
"docid": "d89ba95eb3bd7aca4a7acb17be973c06",
"text": "An UWB elliptical slot antenna embedded with open-end slit on the tuning stub or parasitic strip on the aperture for achieving the band-notch characteristics has been proposed in this conference. Experimental results have also confirmed band-rejection capability for the proposed antenna at the desired band, as well as nearly omni-direction radiation features is still preserved. Finally, how to shrink the geometry dimensions of the UWB antenna will be investigated in the future.",
"title": ""
},
{
"docid": "c8f9d10de0d961e4ee14b6b118b5f89a",
"text": "Deep learning is having a transformative effect on how sensor data are processed and interpreted. As a result, it is becoming increasingly feasible to build sensor-based computational models that are much more robust to real-world noise and complexity than previously possible. It is paramount that these innovations reach mobile and embedded devices that often rely on understanding and reacting to sensor data. However, deep models conventionally demand a level of system resources (e.g., memory and computation) that makes them problematic to run directly on constrained devices. In this work, we present the DeepX toolkit (DXTK); an opensource collection of software components for simplifying the execution of deep models on resource-sensitive platforms. DXTK contains a number of pre-trained low-resource deep models that users can quickly adopt and integrate for their particular application needs. It also offers a range of runtime options for executing deep models on range of devices including both Android and Linux variants. But the heart of DXTK is a series of optimization techniques (viz. weight/sparse factorization, convolution separation, precision scaling, and parameter cleaning). Each technique offers a complementary approach to shaping system resource requirements, and is compatible with deep and convolutional neural networks. We hope that DXTK proves to be a valuable resource for the community, and accelerates the adoption and study of resource-constrained deep learning.",
"title": ""
},
{
"docid": "996f1743ca60efa05f5113a4459f8b61",
"text": "This paper presents a method for movie genre categorization of movie trailers, based on scene categorization. We view our approach as a step forward from using only low-level visual feature cues, towards the eventual goal of high-level seman- tic understanding of feature films. Our approach decom- poses each trailer into a collection of keyframes through shot boundary analysis. From these keyframes, we use state-of- the-art scene detectors and descriptors to extract features, which are then used for shot categorization via unsuper- vised learning. This allows us to represent trailers using a bag-of-visual-words (bovw) model with shot classes as vo- cabularies. We approach the genre classification task by mapping bovw temporally structured trailer features to four high-level movie genres: action, comedy, drama or horror films. We have conducted experiments on 1239 annotated trailers. Our experimental results demonstrate that exploit- ing scene structures improves film genre classification com- pared to using only low-level visual features.",
"title": ""
},
{
"docid": "6b4e1e45ef1b91b7694c62bd5d3cd9fc",
"text": "Recently, academia and law enforcement alike have shown a strong demand for data that is collected from online social networks. In this work, we present a novel method for harvesting such data from social networking websites. Our approach uses a hybrid system that is based on a custom add-on for social networks in combination with a web crawling component. The datasets that our tool collects contain profile information (user data, private messages, photos, etc.) and associated meta-data (internal timestamps and unique identifiers). These social snapshots are significant for security research and in the field of digital forensics. We implemented a prototype for Facebook and evaluated our system on a number of human volunteers. We show the feasibility and efficiency of our approach and its advantages in contrast to traditional techniques that rely on application-specific web crawling and parsing. Furthermore, we investigate different use-cases of our tool that include consensual application and the use of sniffed authentication cookies. Finally, we contribute to the research community by publishing our implementation as an open-source project.",
"title": ""
},
{
"docid": "22ecb164fb7a8bf4968dd7f5e018c736",
"text": "Unsupervised learning techniques in computer vision of ten require learning latent representations, such as low-dimensional linear and non-linear subspaces. Noise and outliers in the data can frustrate these approaches by obscuring the latent spaces. Our main goal is deeper understanding and new development of robust approaches for representation learning. We provide a new interpretation for existing robust approaches and present two specific contributions: a new robust PCA approach, which can separate foreground features from dynamic background, and a novel robust spectral clustering method, that can cluster facial images with high accuracy. Both contributions show superior performance to standard methods on real-world test sets.",
"title": ""
}
] | scidocsrr |
53e06b416a4f5369636047cea17f5d6d | Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy | [
{
"docid": "0704c17b0e0d6df371dd94c4fbcf7817",
"text": "Our ability to explain and predict other people's behaviour by attributing to them independent mental states, such as beliefs and desires, is known as having a 'theory of mind'. Interest in this very human ability has engendered a growing body of evidence concerning its evolution and development and the biological basis of the mechanisms underpinning it. Functional imaging has played a key role in seeking to isolate brain regions specific to this ability. Three areas are consistently activated in association with theory of mind. These are the anterior paracingulate cortex, the superior temporal sulci and the temporal poles bilaterally. This review discusses the functional significance of each of these areas within a social cognitive network.",
"title": ""
}
] | [
{
"docid": "eb888ba37e7e97db36c330548569508d",
"text": "Since the first online demonstration of Neural Machine Translation (NMT) by LISA (Bahdanau et al., 2014), NMT development has recently moved from laboratory to production systems as demonstrated by several entities announcing rollout of NMT engines to replace their existing technologies. NMT systems have a large number of training configurations and the training process of such systems is usually very long, often a few weeks, so role of experimentation is critical and important to share. In this work, we present our approach to production-ready systems simultaneously with release of online demonstrators covering a large variety of languages ( 12 languages, for32 language pairs). We explore different practical choices: an efficient and evolutive open-source framework; data preparation; network architecture; additional implemented features; tuning for production; etc. We discuss about evaluation methodology, present our first findings and we finally outline further work. Our ultimate goal is to share our expertise to build competitive production systems for ”generic” translation. We aim at contributing to set up a collaborative framework to speed-up adoption of the technology, foster further research efforts and enable the delivery and adoption to/by industry of use-case specific engines integrated in real production workflows. Mastering of the technology would allow us to build translation engines suited for particular needs, outperforming current simplest/uniform systems.",
"title": ""
},
{
"docid": "604619dd5f23569eaff40eabc8e94f52",
"text": "Understanding the causes and effects of species invasions is a priority in ecology and conservation biology. One of the crucial steps in evaluating the impact of invasive species is to map changes in their actual and potential distribution and relative abundance across a wide region over an appropriate time span. While direct and indirect remote sensing approaches have long been used to assess the invasion of plant species, the distribution of invasive animals is mainly based on indirect methods that rely on environmental proxies of conditions suitable for colonization by a particular species. The aim of this article is to review recent efforts in the predictive modelling of the spread of both plant and animal invasive species using remote sensing, and to stimulate debate on the potential use of remote sensing in biological invasion monitoring and forecasting. Specifically, the challenges and drawbacks of remote sensing techniques are discussed in relation to: i) developing species distribution models, and ii) studying life cycle changes and phenological variations. Finally, the paper addresses the open challenges and pitfalls of remote sensing for biological invasion studies including sensor characteristics, upscaling and downscaling in species distribution models, and uncertainty of results.",
"title": ""
},
{
"docid": "ce9421a7f8c1ae3a6b3983d7e0ff66c0",
"text": "Supporting Hebb's 1949 hypothesis of use-induced plasticity of the nervous system, our group found in the 1960s that training or differential experience induced neurochemical changes in cerebral cortex of the rat and regional changes in weight of cortex. Further studies revealed changes in cortical thickness, size of synaptic contacts, number of dendritic spines, and dendritic branching. Similar effects were found whether rats were assigned to differential experience at weaning (25 days of age), as young adults (105 days) or as adults (285 days). Enriched early experience improved performance on several tests of learning. Cerebral results of experience in an enriched environment are similar to results of formal training. Enriched experience and training appear to evoke the same cascade of neurochemical events in causing plastic changes in brain. Sufficiently rich experience may be necessary for full growth of species-specific brain characteristics and behavioral potential. Clayton and Krebs found in 1994 that birds that normally store food have larger hippocampi than related species that do not store. This difference develops only in birds given the opportunity to store and recover food. Research on use-induced plasticity is being applied to promote child development, successful aging, and recovery from brain damage; it is also being applied to benefit animals in laboratories, zoos and farms.",
"title": ""
},
{
"docid": "f9b11e55be907175d969cd7e76803caf",
"text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.",
"title": ""
},
{
"docid": "fa89dd854c37fe87d7164e43826fac7c",
"text": "Deployment of public wireless access points (also known as public hotspots) and the prevalence of portable computing devices has made it more convenient for people on travel to access the Internet. On the other hand, it also generates large privacy concerns due to the open environment. However, most users are neglecting the privacy threats because currently there is no way for them to know to what extent their privacy is revealed. In this paper, we examine the privacy leakage in public hotspots from activities such as domain name querying, web browsing, search engine querying and online advertising. We discover that, from these activities multiple categories of user privacy can be leaked, such as identity privacy, location privacy, financial privacy, social privacy and personal privacy. We have collected real data from 20 airport datasets in four countries and discover that the privacy leakage can be up to 68%, which means two thirds of users on travel leak their private information while accessing the Internet at airports. Our results indicate that users are not fully aware of the privacy leakage they can encounter in the wireless environment, especially in public WiFi networks. This fact can urge network service providers and website designers to improve their service by developing better privacy preserving mechanisms.",
"title": ""
},
{
"docid": "0800bfff6569d6d4f3eb00fae0ea1c11",
"text": "An 8-layer, 75 nm half-pitch, 3D stacked vertical-gate (VG) TFT BE-SONOS NAND Flash array is fabricated and characterized. We propose a buried-channel (n-type well) device to improve the read current of TFT NAND, and it also allows the junction-free structure which is particularly important for 3D stackable devices. Large self-boosting disturb-free memory window (6V) can be obtained in our device, and for the first time the “Z-interference” between adjacent vertical layers is studied. The proposed buried-channel VG NAND allows better X, Y pitch scaling and is a very attractive candidate for ultra high-density 3D stackable NAND Flash.",
"title": ""
},
{
"docid": "ef011f601c37f0d08c2567fe7e231324",
"text": "We live in a world were data are generated from a myriad of sources, and it is really cheap to collect and storage such data. However, the real benefit is not related to the data itself, but with the algorithms that are capable of processing such data in a tolerable elapse time, and to extract valuable knowledge from it. Therefore, the use of Big Data Analytics tools provide very significant advantages to both industry and academia. The MapReduce programming framework can be stressed as the main paradigm related with such tools. It is mainly identified by carrying out a distributed execution for the sake of providing a high degree of scalability, together with a fault-",
"title": ""
},
{
"docid": "393f3e89c038b10feebb5ccb4fa80d07",
"text": "Photo-excitation of certain semiconductors can lead to the production of reactive oxygen species that can inactivate microorganisms. The mechanisms involved are reviewed, along with two important applications. The first is the use of photocatalysis to enhance the solar disinfection of water. It is estimated that 750 million people do not have accessed to an improved source for drinking and many more rely on sources that are not safe. If one can utilize photocatalysis to enhance the solar disinfection of water and provide an inexpensive, simple method of water disinfection, then it could help reduce the risk of waterborne disease. The second application is the use of photocatalytic coatings to combat healthcare associated infections. Two challenges are considered, i.e., the use of photocatalytic coatings to give \"self-disinfecting\" surfaces to reduce the risk of transmission of infection via environmental surfaces, and the use of photocatalytic coatings for the decontamination and disinfection of medical devices. In the final section, the development of novel photocatalytic materials for use in disinfection applications is reviewed, taking account of materials, developed for other photocatalytic applications, but which may be transferable for disinfection purposes.",
"title": ""
},
{
"docid": "b59d728b6b2cc63ccff242730571db09",
"text": "Throughout the latter half of the past century cinema has played a significant role in the shaping of the core narratives of Australia. Films express and implicitly shape national images and symbolic representations of cultural fictions in which ideas about Indigenous identity have been embedded. In this paper, exclusionary practices in Australian narratives are analysed through examples of films representing Aboriginal identity. Through these filmic narratives the articulation, interrogation, and contestation of views about filmic representations of Aboriginal identity in Australia is illuminated. The various themes in the filmic narratives are examined in order to compare and contrast the ways in which the films display the operation of narrative closure and dualisms within the film texts.",
"title": ""
},
{
"docid": "37ef43a6ed0dcf0817510b84224d9941",
"text": "Contrast enhancement is one of the most important issues of image processing, pattern recognition and computer vision. The commonly used techniques for contrast enhancement fall into two categories: (1) indirect methods of contrast enhancement and (2) direct methods of contrast enhancement. Indirect approaches mainly modify histogram by assigning new values to the original intensity levels. Histogram speci\"cation and histogram equalization are two popular indirect contrast enhancement methods. However, histogram modi\"cation technique only stretches the global distribution of the intensity. The basic idea of direct contrast enhancement methods is to establish a criterion of contrast measurement and to enhance the image by improving the contrast measure. The contrast can be measured globally and locally. It is more reasonable to de\"ne a local contrast when an image contains textual information. Fuzzy logic has been found many applications in image processing, pattern recognition, etc. Fuzzy set theory is a useful tool for handling the uncertainty in the images associated with vagueness and/or imprecision. In this paper, we propose a novel adaptive direct fuzzy contrast enhancement method based on the fuzzy entropy principle and fuzzy set theory. We have conducted experiments on many images. The experimental results demonstrate that the proposed algorithm is very e!ective in contrast enhancement as well as in preventing over-enhancement. ( 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f81dd0c86a7b45e743e4be117b4030c2",
"text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.",
"title": ""
},
{
"docid": "bd6ba64d14c8234e5ec2d07762a1165f",
"text": "Since their introduction in the early years of this century, Variable Stiffness Actuators (VSA) witnessed a sustain ed growth of interest in the research community, as shown by the growing number of publications. While many consider VSA very interesting for applications, one of the factors hindering their further diffusion is the relatively new conceptual structure of this technology. In choosing a VSA for his/her application, the educated practitioner, used to choosing robot actuators based on standardized procedures and uniformly presented data, would be confronted with an inhomogeneous and rather disorganized mass of information coming mostly from scientific publications. In this paper, the authors consider how the design procedures and data presentation of a generic VS actuator could be organized so as to minimize the engineer’s effort in choosing the actuator type and size that would best fit the application needs. The reader is led through the list of the most important parameters that will determine the ultimate performance of his/her VSA robot, and influence both the mechanical design and the controller shape. This set of parameters extends the description of a traditional electric actuator with quantities describing the capability of the VSA to change its output stiffness. As an instrument for the end-user, the VSA datasheet is intended to be a compact, self-contained description of an actuator that summarizes all the salient characteristics that the user must be aware of when choosing a device for his/her application. At the end some example of compiled VSA datasheets are reported, as well as a few examples of actuator selection procedures.",
"title": ""
},
{
"docid": "058db5e1a8c58a9dc4b68f6f16847abc",
"text": "Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is core for insurance companies. The ultimate goal is a predictive model to single out the fraudulent claims and pay out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claim management. We show that the proposed methods outperform bagof-words based models, hand designed features, and models based on convolutional neural networks, on a data set of two million health care claims. The proposed self-attention method performs the best.",
"title": ""
},
{
"docid": "004a9fcd8a447f8601b901cff338f133",
"text": "Hybrid precoding has been recently proposed as a cost-effective transceiver solution for millimeter wave systems. While the number of radio frequency chains has been effectively reduced in existing works, a large number of high-precision phase shifters are still needed. Practical phase shifters are with coarsely quantized phases, and their number should be reduced to a minimum due to cost and power consideration. In this paper, we propose a novel hardware-efficient implementation for hybrid precoding, called the fixed phase shifter (FPS) implementation. It only requires a small number of phase shifters with quantized and fixed phases. To enhance the spectral efficiency, a switch network is put forward to provide dynamic connections from phase shifters to antennas, which is adaptive to the channel states. An effective alternating minimization algorithm is developed with closed-form solutions in each iteration to determine the hybrid precoder and the states of switches. Moreover, to further reduce the hardware complexity, a group-connected mapping strategy is proposed to reduce the number of switches. Simulation results show that the FPS fully-connected hybrid precoder achieves higher hardware efficiency with much fewer phase shifters than existing proposals. Furthermore, the group-connected mapping achieves a good balance between spectral efficiency and hardware complexity.",
"title": ""
},
{
"docid": "a423435c1dc21c33b93a262fa175f5c5",
"text": "The study investigated several teacher characteristics, with a focus on two measures of teaching experience, and their association with second grade student achievement gains in low performing, high poverty schools in a Mid-Atlantic state. Value-added models using three-level hierarchical linear modeling were used to analyze the data from 1,544 students, 154 teachers, and 53 schools. Results indicated that traditional teacher qualification characteristics such as licensing status and educational attainment were not statistically significant in producing student achievement gains. Total years of teaching experience was also not a significant predictor but a more specific measure, years of teaching experience at a particular grade level, was significantly associated with increased student reading achievement. We caution researchers and policymakers when interpreting results from studies that have used only a general measure of teacher experience as effects are possibly underestimated. Policy implications are discussed.",
"title": ""
},
{
"docid": "e53de7a588d61f513a77573b7b27f514",
"text": "In the past, there have been dozens of studies on automatic authorship classification, and many of these studies concluded that the writing style is one of the best indicators for original authorship. From among the hundreds of features which were developed, syntactic features were best able to reflect an author's writing style. However, due to the high computational complexity for extracting and computing syntactic features, only simple variations of basic syntactic features such as function words, POS(Part of Speech) tags, and rewrite rules were considered. In this paper, we propose a new feature set of k-embedded-edge subtree patterns that holds more syntactic information than previous feature sets. We also propose a novel approach to directly mining them from a given set of syntactic trees. We show that this approach reduces the computational burden of using complex syntactic structures as the feature set. Comprehensive experiments on real-world datasets demonstrate that our approach is reliable and more accurate than previous studies.",
"title": ""
},
{
"docid": "34472a26bc08f7763a1e5f64b5205fe4",
"text": "We propose Sentence Level Recurrent Topic Model (SLRTM), a new topic model that assumes the generation of each word within a sentence to depend on both the topic of the sentence and the whole history of its preceding words in the sentence. Different from conventional topic models that largely ignore the sequential order of words or their topic coherence, SLRTM gives full characterization to them by using a Recurrent Neural Networks (RNN) based framework. Experimental results have shown that SLRTM outperforms several strong baselines on various tasks. Furthermore, SLRTM can automatically generate sentences given a topic (i.e., topics to sentences), which is a key technology for real world applications such as personalized short text conversation.",
"title": ""
},
{
"docid": "94e386866e9e934d53405921963e483a",
"text": "Population pharmacokinetics is the study of pharmacokinetics at the population level, in which data from all individuals in a population are evaluated simultaneously using a nonlinear mixedeffects model. “Nonlinear” refers to the fact that the dependent variable (e.g., concentration) is nonlinearly related to the model parameters and independent variable(s). “Mixed-effects” refers to the parameterization: parameters that do not vary across individuals are referred to as “fixed effects,” parameters that vary across individuals are called “random effects.” There are five major aspects to developing a population pharmacokinetic model: (i) data, (ii) structural model, (iii) statistical model, (iv) covariate models, and (v) modeling software. Structural models describe the typical concentration time course within the population. Statistical models account for “unexplainable” (random) variability in concentration within the population (e.g., betweensubject, between-occasion, residual, etc.). Covariate models explain variability predicted by subject characteristics (covariates). Nonlinear mixed effects modeling software brings data and models together, implementing an estimation method for finding parameters for the structural, statistical, and covariate models that describe the data.1 A primary goal of most population pharmacokinetic modeling evaluations is finding population pharmacokinetic parameters and sources of variability in a population. Other goals include relating observed concentrations to administered doses through identification of predictive covariates in a target population. Population pharmacokinetics does not require “rich” data (many observations/subject), as required for analysis of single-subject data, nor is there a need for structured sampling time schedules. “Sparse” data (few observations/ subject), or a combination, can be used. We examine the fundamentals of five key aspects of population pharmacokinetic modeling together with methods for comparing and evaluating population pharmacokinetic models. DATA CONSIDERATIONS",
"title": ""
},
{
"docid": "e0f56e20d509234a45b0a91f8d6b91cb",
"text": "This paper describes recent research findings on resource sharing between trees and crops in the semiarid tropics and attempts to reconcile this information with current knowledge of the interactions between savannah trees and understorey vegetation by examining agroforestry systems from the perspective of succession. In general, productivity of natural vegetation under savannah trees increases as rainfall decreases, while the opposite occurs in agroforestry. One explanation is that in the savannah, the beneficial effects of microclimatic improvements (e.g. lower temperatures and evaporation losses) are greater in more xeric environments. Mature savannah trees have a high proportion of woody above-ground structure compared to foliage, so that the amount of water 'saved' (largely by reduction in soil evaporation) is greater than water 'lost' through transpiration by trees. By contrast, in agroforestry practices such as alley cropping where tree density is high, any beneficial effects of the trees on microclimate are negated by reductions in soil moisture due to increasing interception losses and tree transpiration. While investment in woody structure can improve the water economy beneath agroforestry trees, it inevitably reduces the growth rate of the trees and thus increases the time required for improved understorey productivity. Therefore, agroforesters prefer trees with more direct and immediate benefits to farmers. The greatest opportunity for simultaneous agroforestry practices is therefore to fill niches within the landscape where resources are currently under-utilised by crops. In this way, agroforestry can mimic the large scale patch dynamics and successional progression of a natural ecosystem.",
"title": ""
}
] | scidocsrr |
d8dc9f9ee05822e70db35ed133a192d8 | SUMMARIZATION USING AGGREGATE SIMILARITY | [
{
"docid": "7b755f9b49187e9a77efc4a2327c80ad",
"text": "In this paper, each document is represented by a weighted graph called a text relationship map. In the graph, each node represents a vector of nouns in a sentence, an undirected link connects two nodes if two sentences are semantically related, and a weight on the link is a value of the similarity between a pair of sentences. The vector similarity can be computed as the inner product between corresponding vector elements. The similarity is based on the word overlap between the corresponding sentences. The importance of a node on the map, called an aggregate similarity, is defined as the sum of weights on the links connecting it to other nodes on the map. In this paper, we present a Korean text summarization system using the aggregate similarity. To evaluate our system, we used two test collections: one collection (PAPER-InCon) consists of 100 papers in the domain of computer science; the other collection (NEWS) is composed of 105 articles in the newspapers. Under the compression rate of 20%, we achieved the recall of 46.6% (PAPER-InCon) and 30.5% (NEWS), and the precision of 76.9% (PAPER-InCon) and 42.3% (NEWS). Experiments show that our system outperforms two commercial systems.",
"title": ""
}
] | [
{
"docid": "5b392df7f03046bb8c15c8bdaa5a811f",
"text": "The inefficiency of separable wavelets in representing smooth edges has led to a great interest in the study of new 2-D transformations. The most popular criterion for analyzing these transformations is the approximation power. Transformations with near-optimal approximation power are useful in many applications such as denoising and enhancement. However, they are not necessarily good for compression. Therefore, most of the nearly optimal transformations such as curvelets and contourlets have not found any application in image compression yet. One of the most promising schemes for image compression is the elegant idea of directional wavelets (DIWs). While these algorithms outperform the state-of-the-art image coders in practice, our theoretical understanding of them is very limited. In this paper, we adopt the notion of rate-distortion and calculate the performance of the DIW on a class of edge-like images. Our theoretical analysis shows that if the edges are not “sharp,” the DIW will compress them more efficiently than the separable wavelets. It also demonstrates the inefficiency of the quadtree partitioning that is often used with the DIW. To solve this issue, we propose a new partitioning scheme called megaquad partitioning. Our simulation results on real-world images confirm the benefits of the proposed partitioning algorithm, promised by our theoretical analysis.",
"title": ""
},
{
"docid": "8d98529cd3fc92eba091e09ea223df4e",
"text": "Exploring small connected and induced subgraph patterns (CIS patterns, or graphlets) has recently attracted considerable attention. Despite recent efforts on computing the number of instances a specific graphlet appears in a large graph (i.e., the total number of CISes isomorphic to the graphlet), little attention has been paid to characterizing a node’s graphlet degree, i.e., the number of CISes isomorphic to the graphlet that include the node, which is an important metric for analyzing complex networks such as social and biological networks. Similar to global graphlet counting, it is challenging to compute node graphlet degrees for a large graph due to the combinatorial nature of the problem. Unfortunately, previous methods of computing global graphlet counts are not suited to solve this problem. In this paper we propose sampling methods to estimate node graphlet degrees for undirected and directed graphs, and analyze the error of our estimates. To the best of our knowledge, we are the first to study this problem and give a fast scalable solution. We conduct experiments on a variety of real-word datasets that demonstrate that our methods accurately and efficiently estimate node graphlet degrees for graphs with millions of edges.",
"title": ""
},
{
"docid": "d0690dcac9bf28f1fe6e2153035f898c",
"text": "The estimation of the homography between two views is a key step in many applications involving multiple view geometry. The homography exists between two views between projections of points on a 3D plane. A homography exists also between projections of all points if the cameras have purely rotational motion. A number of algorithms have been proposed for the estimation of the homography relation between two images of a planar scene. They use features or primitives ranging from simple points to a complex ones like non-parametric curves. Different algorithms make different assumptions on the imaging setup and what is known about them. This article surveys several homography estimation techniques from the literature. The essential theory behind each method is presented briefly and compared with the others. Experiments aimed at providing a representative analysis and comparison of the methods discussed are also presented in the paper.",
"title": ""
},
{
"docid": "910a416dc736ec3566583c57123ac87c",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman husnu@ou.edu 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "39208755abbd92af643d0e30029f6cc0",
"text": "The biomedical community makes extensive use of text mining technology. In the past several years, enormous progress has been made in developing tools and methods, and the community has been witness to some exciting developments. Although the state of the community is regularly reviewed, the sheer volume of work related to biomedical text mining and the rapid pace in which progress continues to be made make this a worthwhile, if not necessary, endeavor. This chapter provides a brief overview of the current state of text mining in the biomedical domain. Emphasis is placed on the resources and tools available to biomedical researchers and practitioners, as well as the major text mining tasks of interest to the community. These tasks include the recognition of explicit facts from biomedical literature, the discovery of previously unknown or implicit facts, document summarization, and question answering. For each topic, its basic challenges and methods are outlined and recent and influential work is reviewed.",
"title": ""
},
{
"docid": "d3281adf2e84a5bab8b03ab9ee8a2977",
"text": "The concept of Learning Health Systems (LHS) is gaining momentum as more and more electronic healthcare data becomes increasingly accessible. The core idea is to enable learning from the collective experience of a care delivery network as recorded in the observational data, to iteratively improve care quality as care is being provided in a real world setting. In line with this vision, much recent research effort has been devoted to exploring machine learning, data mining and data visualization methodologies that can be used to derive real world evidence from diverse sources of healthcare data to provide personalized decision support for care delivery and care management. In this chapter, we will give an overview of a wide range of analytics and visualization components we have developed, examples of clinical insights reached from these components, and some new directions we are taking.",
"title": ""
},
{
"docid": "2aade03834c6db2ecc2912996fd97501",
"text": "User contributions in the form of posts, comments, and votes are essential to the success of online communities. However, allowing user participation also invites undesirable behavior such as trolling. In this paper, we characterize antisocial behavior in three large online discussion communities by analyzing users who were banned from these communities. We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users. Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community. Further, we discover that antisocial behavior is exacerbated when community feedback is overly harsh. Our analysis also reveals distinct groups of users with different levels of antisocial behavior that can change over time. We use these insights to identify antisocial users early on, a task of high practical importance to community maintainers.",
"title": ""
},
{
"docid": "273a959e67ada56252f62b3c921b5d52",
"text": "Metric learning for music is an important problem for many music information retrieval (MIR) applications such as music generation, analysis, retrieval, classification and recommendation. Traditional music metrics are mostly defined on linear transformations of handcrafted audio features, and may be improper in many situations given the large variety of music styles and instrumentations. In this paper, we propose a deep neural network named Triplet MatchNet to learn metrics directly from raw audio signals of triplets of music excerpts with human-annotated relative similarity in a supervised fashion. It has the advantage of learning highly nonlinear feature representations and metrics in this end-to-end architecture. Experiments on a widely used music similarity measure dataset show that our method significantly outperforms three state-of-the-art music metric learning methods. Experiments also show that the learned features better preserve the partial orders of the relative similarity than handcrafted features.",
"title": ""
},
{
"docid": "08844c98f9d6b92f84d272516af64281",
"text": "This paper describes the synthesis of Dynamic Differential Logic to increase the resistance of FPGA implementations against Differential Power Analysis. The synthesis procedure is developed and a detailed description is given of how EDA tools should be used appropriately to implement a secure digital design flow. Compared with an existing technique to implement Dynamic Differential Logic on FPGA, the technique saves a factor 2 in slice utilization. Experimental results also indicate that a secure version of the AES encryption algorithm can now be implemented with a mere 50% increase in time delay and 90% increase in slice utilization when compared with a normal non-secure single ended implementation.",
"title": ""
},
{
"docid": "5bff5809ff470084497011a1860148e0",
"text": "A statistical meta-analysis of the technology acceptance model (TAM) as applied in various fields was conducted using 88 published studies that provided sufficient data to be credible. The results show TAM to be a valid and robust model that has been widely used, but which potentially has wider applicability. A moderator analysis involving user types and usage types was performed to investigate conditions under which TAM may have different effects. The study confirmed the value of using students as surrogates for professionals in some TAM studies, and perhaps more generally. It also revealed the power of meta-analysis as a rigorous alternative to qualitative and narrative literature review methods. # 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "10f4398671e3dab3d8414554535511dd",
"text": "As mobile devices become more and more popular, mobile gaming has emerged as a promising market with billion-dollar revenues. A variety of mobile game platforms and services have been developed around the world. A critical challenge for these platforms and services is to understand the churn behavior in mobile games, which usually involves churn at micro level (between an app and a specific user) and macro level (between an app and all its users). Accurate micro-level churn prediction and macro-level churn ranking will benefit many stakeholders such as game developers, advertisers, and platform operators. In this paper, we present the first large-scale churn analysis for mobile games that supports both micro-level churn prediction and macrolevel churn ranking. For micro-level churn prediction, in view of the common limitations of the state-of-the-art methods built upon traditional machine learning models, we devise a novel semi-supervised and inductive embedding model that jointly learns the prediction function and the embedding function for user-app relationships. We model these two functions by deep neural networks with a unique edge embedding technique that is able to capture both contextual information and relationship dynamics. We also design a novel attributed random walk technique that takes into consideration both topological adjacency and attribute similarities. To address macro-level churn ranking, we propose to construct a relationship graph with estimated micro-level churn probabilities as edge weights and adapt link analysis algorithms on the graph. We devise a simple algorithm SimSum and adapt two more advanced algorithms PageRank and HITS. The performance of our solutions for the two-level churn analysis problems is evaluated on real-world data collected from the Samsung Game Launcher platform. The data includes tens of thousands of mobile games and hundreds of millions of user∗ This work was done during the authors’ internships at Samsung Research America Received xxx Revised xxx Accepted xxx ar X iv :1 90 1. 06 24 7v 1 [ cs .L G ] 1 4 Ja n 20 19",
"title": ""
},
{
"docid": "424a0f5f4a725b85fabb8c7ee19c6e3c",
"text": "The data on dental variability in natural populations of sibling species of common voles (“arvalis” group, genus Microtus) from European and Asian parts of the species’ ranges are summarized using a morphotype-based approach to analysis of dentition. Frequency distributions of the first lower (m1) and the third upper (M3) molar morphotypes are analyzed in about 65 samples of M. rossiaemeridionalis and M. arvalis represented by arvalis and obscurus karyotypic forms. Because of extreme similarity of morphotype dental patterns in the taxa studied, it is impossible to use molar morphotype frequencies for species identification. However, a morphotype-based approach to analysis of dental variability does allow analysis of inter-species comparisons from an evolutionary standpoint. Three patterns of dental complexity are established in the taxa studied: simple, basic (the most typical within the ranges of both species), and complex. In M. rossiaemeridionalis and in M. arvalis obscurus only the basic pattern of dentition occurs. In M. arvalis arvalis, both simple and basic dental patterns are found. Analysis of association of morphotype dental patterns with geographical and environmental variables reveals an increase in the number of complex molars with longitude and latitude: in M. arvalis the pattern of molar complication is more strongly related to longitude, and in M. rossiaemeridionalis—to latitude. Significant decrease in incidence of simple molars with climate continentality and increasing aridity is found in M. arvalis. The simple pattern of dentition is found in M. arvalis arvalis in Spain, along the Atlantic coast of France and on islands thereabout, in northeastern Germany and Kirov region in European Russia. Hypotheses to explain the distribution of populations with different dental patterns within the range of M. arvalis sensu stricto are discussed.",
"title": ""
},
{
"docid": "e22378cc4ae64e9c3abbd4b308198fb6",
"text": "Knowledge about the argumentative structure of scientific articles can, amongst other things, be used to improve automatic abstracts. We argue that the argumentative structure of scientific discourse can be automatically detected because reasordng about problems, research tasks and solutions follows predictable patterns. Certain phrases explicitly mark the rhetorical status (communicative function) of sentences with respect to the global argumentative goal. Examples for such meta-diacaurse markers are \"in this paper, we have p r e s e n t e d . . . \" or \"however, their method fails to\". We report on work in progress about recognizing such meta-comments automatically in research articles from two disciplines: computational linguistics and medicine (cardiology). 1 M o t i v a t i o n We are interested in a formal description of the document s t ructure of scientific articles from different disciplines. Such a description could be of practical use for many applications in document management; our specific mot ivat ion for detecting document structure is qual i ty improvement in automatic abstracting. Researchem in the field of automatic abstracting largely agree that it is currently not technically feasible to create automatic abstracts based on full text unders tanding (Sparck Jones 1994). As a result, many researchers have turned to sentence extraction (Kupiec, Pedersen, & Chen 1995; Brandow, Mitze, & Rau 1995; Hovy & Lin 1997). Sentence extraction, which does not involve any deep analysis, has the huge advantage of being robust with respect to individual writing style, discipline and text type (genre). Instead of producing a b s t r a c t , this results produces only extracts: documen t surrogates consisting of a number of sentences selected verbat im from the original text. We consider a concrete document retrieval (DR) scenario in which a researcher wants to select one or more scientific articles from a large scientific database (or even f rom the Internet) for further inspection. The ma in task for the searcher is relevance decision for each paper: she needs to decide whether or not to spend more t ime on a paper (read or skim-read it), depending on how useful it presumably is to her current information needs. Traditional sentence extracts can be used as rough-and-ready relevance indicators for this task, but they are not doing a great job at representing the contents of the original document: searchers often get the wrong idea about what the text is about. Much of this has to do with the fact that extracts are typically incoherent texts, consisting of potential ly unrelated sentences which have been taken out of their context. Crucially, extracts have no handle at revealing the text 's logical and semantic organisation. More sophisticated, user-tailored abstracts could help the searcher make a fast, informed relevance decision by taking factors like the searcher's expertise and current information need into account. If the searcher is dealing with research she knows well, her information needs might be quite concrete: during the process of writing her own paper she might want to find research which supports her own claims, find out if there are contradictory results to hers in the literature, or compare her results to those of researchers using a similar methodology. 
A different information need arises when she wants to gain an overview of a new research area: as an only \"partially informed user\" in this field (Kircz 1991), she will need to find out about specific research goals, the names of the researchers who have contributed the main research ideas in a given time period, along with information on methodology and results in this research field. There are new functions these abstracts could fulfil. In order to make an informed relevance decision, the searcher needs to judge differences and similarities between papers, e.g. how a given paper relates to similar papers with respect to research goals or methodology, so that she can place the research described in a given paper in the larger picture of the field, a function we call navigation between research articles. A similar operation is navigation within a paper, which supports searchers in non-linear reading and allows them to find relevant information faster, e.g. numerical results. We believe that a document surrogate that aims at supporting such functions should characterize research articles in terms of the problems, research tasks and",
"title": ""
},
{
"docid": "59c757aa28dcb770ecf5b01dc26ba087",
"text": "Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google’s manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).",
"title": ""
},
{
"docid": "fc779c615e0661c6247998532fee55cc",
"text": "This paper presents a challenge to the community: given a large corpus of written text aligned to its normalized spoken form, train an RNN to learn the correct normalization function. We present a data set of general text where the normalizations were generated using an existing text normalization component of a text-to-speech system. This data set will be released open-source in the near future. We also present our own experiments with this data set with a variety of different RNN architectures. While some of the architectures do in fact produce very good results when measured in terms of overall accuracy, the errors that are produced are problematic, since they would convey completely the wrong message if such a system were deployed in a speech application. On the other hand, we show that a simple FST-based filter can mitigate those errors, and achieve a level of accuracy not achievable by the RNN alone. Though our conclusions are largely negative on this point, we are actually not arguing that the text normalization problem is intractable using an pure RNN approach, merely that it is not going to be something that can be solved merely by having huge amounts of annotated text data and feeding that to a general RNN model. Andwhenwe open-source our data, we will be providing a novel data set for sequenceto-sequence modeling in the hopes that the the community can find better solutions.",
"title": ""
},
{
"docid": "b8c683c194792a399f9c12fdf7e9f0cd",
"text": "The rise of Social Media services in the last years has created huge streams of information that can be very valuable in a variety of scenarios. What precisely these scenarios are and how the data streams can efficiently be analyzed for each scenario is still largely unclear at this point in time and has therefore created significant interest in industry and academia. In this paper, we describe a novel algorithm for geo-spatial event detection on Social Media streams. We monitor all posts on Twitter issued in a given geographic region and identify places that show a high amount of activity. In a second processing step, we analyze the resulting spatio-temporal clusters of posts with a Machine Learning component in order to detect whether they constitute real-world events or not. We show that this can be done with high precision and recall. The detected events are finally displayed to a user on a map, at the location where they happen and while they happen.",
"title": ""
},
{
"docid": "a41c9650da7ca29a51d310cb4a3c814d",
"text": "The analysis of resonant-type antennas based on the fundamental infinite wavelength supported by certain periodic structures is presented. Since the phase shift is zero for a unit-cell that supports an infinite wavelength, the physical size of the antenna can be arbitrary; the antenna's size is independent of the resonance phenomenon. The antenna's operational frequency depends only on its unit-cell and the antenna's physical size depends on the number of unit-cells. In particular, the unit-cell is based on the composite right/left-handed (CRLH) metamaterial transmission line (TL). It is shown that the CRLH TL is a general model for the required unit-cell, which includes a nonessential series capacitance for the generation of an infinite wavelength. The analysis and design of the required unit-cell is discussed based upon field distributions and dispersion diagrams. It is also shown that the supported infinite wavelength can be used to generate a monopolar radiation pattern. Infinite wavelength resonant antennas are realized with different number of unit-cells to demonstrate the infinite wavelength resonance",
"title": ""
},
{
"docid": "fbfbb339657f2a0a97f8a65dfb99ffbc",
"text": "This work describes a novel technique of designing a high gain low noise CMOS instrumentation amplifier for biomedical applications like ECG signal processing. A three opamp instrumentation amplifier have been designed by using two simple op-amps at the two input stages and a folded cascode opamp at the output stage. Both op-amps at the input and output are 2-stage. Most of the previous or earlier designed op-amp in literature uses same type of op-amp at the input and output stages of instrumentation amplifier. By using folded cascode op-amp at the output, we had achieved significant improvement in gain and CMRR. Transistors sizing plays a major role in achieving high gain and CMRR. To achieve a desirable common mode rejection ratio (CMRR), Gain and other performance metrics, selection of most appropriable op-amp circuit topologies & optimum transistor sizing was the main criteria for designing of instrumentation amplifier for biomedical applications. The complete instrumentation amplifier design is simulated using Cadence Spectre tool and layout is designed and simulated in Cadence Layout editor at 0.18μm CMOS technology. Each of the input two stage op-amp provides a gain and CMRR of 45dB and 72dB respectively. The output two stage folded cascode amplifier provides a CMRR of 92dB and a gain of 82dB. The design achieves an overall CMRR and gain of 92dB and 67db respectively. The overall power consumed by instrumentation amplifier is 263μW which is suitable for biomedical signal processing applications.",
"title": ""
},
{
"docid": "5932b3f1f0523f07190855e51abc04b9",
"text": "This paper proposes an optimization algorithm based on how human fight and learn from each duelist. Since this algorithm is based on population, the proposed algorithm starts with an initial set of duelists. The duel is to determine the winner and loser. The loser learns from the winner, while the winner try their new skill or technique that may improve their fighting capabilities. A few duelists with highest fighting capabilities are called as champion. The champion train a new duelists such as their capabilities. The new duelist will join the tournament as a representative of each champion. All duelist are re-evaluated, and the duelists with worst fighting capabilities is eliminated to maintain the amount of duelists. Two optimization problem is applied for the proposed algorithm, together with genetic algorithm, particle swarm optimization and imperialist competitive algorithm. The results show that the proposed algorithm is able to find the better global optimum and faster iteration. Keywords—Optimization; global, algorithm; duelist; fighting",
"title": ""
},
{
"docid": "bfa87a59940f6848d8d5b53b89c16735",
"text": "The over-segmentation of images into atomic regions has become a standard and powerful tool in Vision. Traditional superpixel methods, that operate at the pixel level, cannot directly capture the geometric information disseminated into the images. We propose an alternative to these methods by operating at the level of geometric shapes. Our algorithm partitions images into convex polygons. It presents several interesting properties in terms of geometric guarantees, region compactness and scalability. The overall strategy consists in building a Voronoi diagram that conforms to preliminarily detected line-segments, before homogenizing the partition by spatial point process distributed over the image gradient. Our method is particularly adapted to images with strong geometric signatures, typically man-made objects and environments. We show the potential of our approach with experiments on large-scale images and comparisons with state-of-the-art superpixel methods.",
"title": ""
}
] | scidocsrr |
df90844da0f4c9240cd051235a7ce7d4 | Small-Signal Model-Based Control Strategy for Balancing Individual DC Capacitor Voltages in Cascade Multilevel Inverter-Based STATCOM | [
{
"docid": "bd2041c4fa88cbdc73c68cf2586df849",
"text": "This paper presents a three-phase transformerless cascade pulsewidth-modulation (PWM) static synchronous compensator (STATCOM) intended for installation on industrial and utility power distribution systems. It proposes a control algorithm that devotes itself not only to meeting the demand of reactive power but also to voltage balancing of multiple galvanically isolated and floating dc capacitors. The control algorithm based on a phase-shifted carrier modulation strategy is prominent in having no restriction on the cascade number. Experimental waveforms verify that a 200-V 10-kVA cascade PWM STATCOM with star configuration has the capability of inductive to capacitive (or capacitive to inductive) operation at the rated reactive power of 10 kVA within 20 ms while keeping the nine dc mean voltages controlled and balanced even during the transient state.",
"title": ""
},
{
"docid": "264fef3aa71df1f661f2b94461f9634c",
"text": "This paper presents a new control method for cascaded connected H-bridge converter-based static compensators. These converters have classically been commutated at fundamental line frequencies, but the evolution of power semiconductors has allowed the increase of switching frequencies and power ratings of these devices, permitting the use of pulsewidth modulation techniques. This paper mainly focuses on dc-bus voltage balancing problems and proposes a new control technique (individual voltage balancing strategy), which solves these balancing problems, maintaining the delivered reactive power equally distributed among all the H-bridges of the converter.",
"title": ""
}
] | [
{
"docid": "aa60d0d73efdf21adcc95c6ad7a7dbc3",
"text": "While hardware obfuscation has been used in industry for many years, very few scientific papers discuss layout-level obfuscation. The main aim of this paper is to start a discussion about hardware obfuscation in the academic community and point out open research problems. In particular, we introduce a very flexible layout-level obfuscation tool that we use as a case study for hardware obfuscation. In this obfuscation tool, a small custom-made obfuscell is used in conjunction with a standard cell to build a new obfuscated standard cell library called Obfusgates. This standard cell library can be used to synthesize any HDL code with standard synthesis tools, e.g. Synopsis Design Compiler. However, only obfuscating the functionality of individual gates is not enough. Not only the functionality of individual gates, but also their connectivity, leaks important important information about the design. In our tool we therefore designed the obfuscation gates to include a large number of \"dummy wires\". Due to these dummy wires, the connectivity of the gates in addition to their logic functionality is obfuscated. We argue that this aspect of obfuscation is of great importance in practice and that there are many interesting open research questions related to this.",
"title": ""
},
{
"docid": "4fb93d604733837782085ecb19b49621",
"text": "Text in many domains involves a significant amount of named entities. Predicting the entity names is often challenging for a language model as they appear less frequent on the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model which can learn the entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming codes, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity in recipe generation and 22.06% on code generation than the stateof-the-art language models.",
"title": ""
},
{
"docid": "ddb36948e400c970309bd0886bfcfccb",
"text": "1 Introduction \"S pace\" and \"place\" are familiar words denoting common \"Sexperiences. We live in space. There is no space for an-< • / other building on the lot. The Great Plains look spacious. Place is security, space is freedom: we are attached to the one and long for the other. There is no place like home. What is home? It is the old homestead, the old neighborhood, home-town, or motherland. Geographers study places. Planners would like to evoke \"a sense of place.\" These are unexceptional ways of speaking. Space and place are basic components of the lived world; we take them for granted. When we think about them, however, they may assume unexpected meanings and raise questions we have not thought to ask. What is space? Let an episode in the life of the theologian Paul Tillich focus the question so that it bears on the meaning of space in experience. Tillich was born and brought up in a small town in eastern Germany before the turn of the century. The town was medieval in character. Surrounded by a wall and administered from a medieval town hall, it gave the impression of a small, protected, and self-contained world. To an imaginative child it felt narrow and restrictive. Every year, however young Tillich was able to escape with his family to the Baltic Sea. The flight to the limitless horizon and unrestricted space 3 4 Introduction of the seashore was a great event. Much later Tillich chose a place on the Atlantic Ocean for his days of retirement, a decision that undoubtedly owed much to those early experiences. As a boy Tillich was also able to escape from the narrowness of small-town life by making trips to Berlin. Visits to the big city curiously reminded him of the sea. Berlin, too, gave Tillich a feeling of openness, infinity, unrestricted space. 1 Experiences of this kind make us ponder anew the meaning of a word like \"space\" or \"spaciousness\" that we think we know well. What is a place? What gives a place its identity, its aura? These questions occurred to the physicists Niels Bohr and Werner Heisenberg when they visited Kronberg Castle in Denmark. Bohr said to Heisenberg: Isn't it strange how this castle changes as soon as one imagines that Hamlet lived here? As scientists we believe that a castle consists only of stones, and admire the way the …",
"title": ""
},
{
"docid": "541440dc7497e14876642e837c2207c7",
"text": "We propose several simple approaches to training deep neural networks on data with noisy labels. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark, showing how additional noisy data can improve state-of-the-art recognition models. 1 Introduction In recent years, deep learning methods have shown impressive results on image classification tasks. However, this achievement is only possible because of large amount of labeled images. Labeling images by hand is a laborious task and takes a lot of time and money. An alternative approach is to generate labels automatically. This includes user tags from social web sites and keywords from image search engines. Considering the abundance of such noisy labels, it is important to find a way to utilize them in deep learning. Unfortunately, those labels are very noisy and unlikely to help training deep networks without additional tricks. Our goal is to study the effect label noise on deep networks, and explore simple ways of improvement. We focus on the robustness of deep networks instead of data cleaning methods, which are well studied and can be used together with robust models directly. Although many noise robust classifiers are proposed so far, there are not many works on training deep networks on noisy labeled data, especially on large scale datasets. Our contribution in this paper is a novel way of modifying deep learning models so they can be effectively trained on data with high level of label noise. The modification is simply done by adding a linear layer on top of the softmax layer, which makes it easy to implement. This additional layer changes the output from the network to give better match to the noisy labels. Also, it is possible to learn the noise distribution directly from the noisy data. Using real-world image classification tasks, we demonstrate that the model actually works very well in practice. We even show that random images without labels (complete noise) can improve the classification performance. 2 Related Work In any classification model, degradation of performance is inevitable when there is noise in training labels [13, 15]. A simple approach to handle noisy labels is a data preprocessing stage, where labels suspected to be incorrect are removed or corrected [1, 3]. However, a weakness of this approach is the difficulty of distinguishing informative hard samples from harmful mislabeled ones [6]. Instead, in this paper, we focus on models robust to presence of label noise. 1 ar X iv :1 40 6. 20 80 v1 [ cs .C V ] 9 J un 2 01 4 The effect of label noise is well studied in common classifiers (e.g., SVMs, kNN, logistic regression), and their label noise robust variants have been proposed. See [5] for comprehensive review. A more recent work [2] proposed a generic unbiased estimator for binary classification with noisy labels. They employ a surrogate cost function that can be expressed by a weighted sum of the original cost functions, and gave theoretical bounds on the performance. In this paper, we will also consider this idea and extend it multiclass. A cost function similar to ours is proposed in [2] to make logistic regression robust to label noise. 
They also proposed a learning algorithm for noise parameters. However, we consider deep networks, a more powerful and complex classifier than logistic regression, and propose a different learning algorithm for noise parameters that is more suited for back-propagation training. Considering the recent success of deep learning [8, 17, 16], there are very few works about deep learning from noisy labels. In [11, 9], noise modeling is incorporated into the neural network in the same way as in our proposed model. However, only binary classification is considered in [11], and [9] assumed symmetric label noise (noise is independent of the true label). Therefore, there is only a single noise parameter, which can be tuned by cross-validation. In this paper, we consider multiclass classification and assume more realistic asymmetric label noise, which makes it impossible to use cross-validation to adjust noise parameters (there can be a million parameters). 3 Approach In this paper, we consider two approaches to make an existing classification model, which we call the base model, robust against noisy labels: bottom-up and top-down noise models. In the bottom-up model, we add an additional layer to the model that changes the label probabilities output by the base model so they better match the noisy labels. The top-down model, on the other hand, changes the given noisy labels before feeding them to the base model. Both models require a noise model for training, so we will give an easy way to estimate noise levels using clean data. Also, it is possible to learn the noise distribution from noisy data in the bottom-up model. Although only deep neural networks are used in our experiments, both approaches can be applied to any classification model with a cross entropy cost. 3.1 Bottom-up Noise Model We assume that label noise is random conditioned on the true class, but independent of the input x (see [10] for more detail about this type of noise). Based on this assumption, we add an additional layer to a deep network (see Figure 1) that changes its output so it better matches the noisy labels. The weights of this layer correspond to the probabilities of a certain class being mislabeled as another class. Because those probabilities are often unknown, we will show how to estimate them from additional clean data, or from the noisy data itself. Let D be the true data distribution generating correctly labeled samples (x, y∗), where x is an input vector and y∗ is the corresponding label. However, we only observe noisy labeled samples (x, ỹ) generated from some noisy distribution D̃. We assume that the label noise is random conditioned on the true labels. Then, the noise distribution can be parameterized by a matrix Q = {q_ji}: q_ji := p(ỹ = j|y∗ = i). Q is a probability matrix because its elements are positive and each column sums to one. The probability of input x being labeled as j in D̃ is given by p(ỹ = j|x, θ) = ∑_i p(ỹ = j|y∗ = i) p(y∗ = i|x, θ) = ∑_i q_ji p(y∗ = i|x, θ), (1) where p(y∗ = i|x, θ) is the probabilistic output of the base model with parameters θ. If the true noise distribution is known, we can use this to modify the model for noisy labeled data. During training, Q will act as an adapter that transforms the model’s output to better match the noisy labels.",
"title": ""
},
{
"docid": "c460ac78bb06e7b5381506f54200a328",
"text": "Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize the resource utilization as they cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristical and lack theoretical performance guarantees. In this work, we formulate dynamic VM management as a large-scale Markov Decision Process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, MadVM can be implemented in a distributed system, which should suit the needs of real data centers. Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms.",
"title": ""
},
{
"docid": "ec11d0b10af5507c18d918edb42a9ab8",
"text": "Traditional way of manual meter reading was not only waste of human and material resources, but also very inconvenient. Especially with the emergence of a number of high residential in recent years, this traditional way of water management was obviously inefficient. Cable automatic meter reading system is very vulnerable and it needs a heavy workload of construction wiring. In this paper, based on the study of existed water meters, a kind of design schema of wireless smart water meter was introduced. In the system, the main communication way is based on Zigbee technology. This kind of design schema is appropriate for the modern water management and the efficiency can be improved.",
"title": ""
},
{
"docid": "9aa91978651f42157b42a55b936a9bc0",
"text": "Suicide, the eighth leading cause of death in the United States, accounts for more than 30 000 deaths per year. The total number of suicides has changed little over time. For example, 27 596 U.S. suicides occurred in 1981, and 30 575 occurred in 1998. Between 1981 and 1998, the age-adjusted suicide rate decreased by 9.3%from 11.49 suicides per 100 000 persons to 10.42 suicides per 100 000 persons (www.cdc.gov/ncipc/wisqars). The suicide rate in men (18.7 suicides per 100 000 men in 1998) is more than four times that in women (4.4 suicides per 100 000 women in 1998). In females, suicide rates remain relatively constant beginning in the midteens. In males, suicide rates are stable from the late teenage years until the late 70s, when the rate increases substantially (to 41 suicides per 100 000 persons annually in men 75 to 84 years of age). White men have a twofold higher risk for suicide compared with African-American men (20.2 vs. 10.9 suicides, respectively, each year per 100 000 men). The risk in white women is double that of women in U.S. nonwhite ethnic/racial minority groups (4.9 vs. 2.4 per 100 000 women each year). In countries other than the United States, the most recently reported rates of suicide vary widely, ranging from less than 1 in 100 000 persons per year in Syria, Egypt, and Lebanon to more than 40 in 100 000 persons per year in many former Soviet republics (www.who.int/whosis). Over the past century, Hungary had the world's highest reported rate of suicide; the reason is unknown. Of note, the reported rates of suicide in first-generation immigrants to Australia tend to be more similar to rates in their native country than to rates in their country of current residence (1, 2); these figures indicate the influence of culture and ethnicity on suicide rates. Suicide is the third leading cause of death in persons 15 to 34 years of age. The U.S. suicide rate in all youths decreased by 18% from 1990 to 1998 (www.cdc.gov/ncipc/wisqars) despite a 3.6-fold increase from 1992 to 1995 in white men, a 4.7-fold increase in African-American men, and a 2.1-fold increase in African-American women. Worldwide, from 1950 to 1995 in persons of all ages, suicide rates increased by approximately 35% in men and only approximately 10% in women (www.who.int/whosis). The reasons for the differences in rates among age, sex, and ethnic groups and the change in rates since the 1950s are unknown. Suicide is generally a complication of a psychiatric disorder. More than 90% of suicide victims have a diagnosable psychiatric illness (3-7), and most persons who attempt suicide have a psychiatric disorder. The most common psychiatric conditions associated with suicide or serious suicide attempt are mood disorders (3-8). Investigators have proposed many models to explain or predict suicide (9). One such explanatory and predictive model is the stress-diathesis model (10). One stressor is almost invariably the onset or acute worsening of a psychiatric disorder, but other types of stressors, such as a psychosocial crisis, can also contribute. The diathesis for suicidal behavior includes a combination of factors, such as sex, religion, familial and genetic components, childhood experiences, psychosocial support system, availability of highly lethal suicide methods, and various other factors, including cholesterol level. In this review, I describe the neurobiological correlates of the stressors and the diathesis. 
Literature for this review came from searches of the MEDLINE database (1996 to the present) and from literature cited in review articles. The factors that determined inclusion in this review were superiority of research design (use of psychiatric controls, quality of psychometrics, diagnostic information on the study sample, and definition of suicidal behavior; prospective studies were favored), adequate representation of major points of view, and pivotal reviews of key subjects. What is Suicidal Behavior? Suicidal behavior refers to the most clear-cut and unambiguous act of completed suicide but also includes a heterogeneous spectrum of suicide attempts that range from highly lethal attempts (in which survival is the result of good fortune) to low-lethality attempts that occur in the context of a social crisis and contain a strong element of an appeal for help (11). Suicidal ideation without action is more common than suicidal behavior (11). In most countries, men have a higher reported rate of completed suicide, whereas women have a higher rate of attempted suicide (12). Men tend to use means that are more lethal, plan the suicide attempt more carefully, and avoid detection. In contrast, women tend to use less lethal means of suicide, which carry a higher chance of survival, and they more commonly express an appeal for help by conducting the attempt in a manner that favors discovery and rescue (13, 14). Thus, suicidal behavior has two dimensions (13). The first dimension is the degree of medical lethality or damage resulting from the suicide attempt. The second dimension relates to suicidal intent and measures the degree of preparation, the desire to die versus the desire to live, and the chances of discovery. Intent and lethality are correlated with each other and with biological abnormalities associated with suicide risk (13, 15, 16). The clinical profiles of suicide attempts and completions overlap (17). Suicide attempters who survive very lethal attempts, which are known as failed suicides, have the same clinical and psychosocial profile as suicide completers (11, 17). The study and prevention of failed suicides are probably most relevant to completed suicides. Somewhat related to suicide attempters are patients with serious medical illnesses who do not adhere to treatment regimensfor example, diabetic patients who do not take prescribed medications to control blood sugar levels or persons who engage in high-risk behaviors, such as sky diving or mountaineering. These groups warrant further study to determine whether they have psychopathology that overlaps with the psychopathology of suicide attempters. Intent and lethality are also related to the risk for future completed suicide (13). Subsequent suicide attempts may involve a greater degree of intent and lethality (18), and a previous suicide attempt is an important predictor of future suicide (19, 20) or suicide attempt (21). Careful inquiry about past suicide attempts is an essential part of risk assessment in psychiatric patients. Because more than two thirds of suicides occur with the first attempt, history of a suicide attempt is insufficient to predict most suicides; additional risk factors must be considered. Clinical Correlates of Suicidal Behavior Psychological autopsy studies involve review of all available medical records and interviews with family members and friends of the deceased. 
This method generates valid psychiatric diagnoses (22), and most studies have found that more than 90% of suicide victims had a psychiatric disorder at the time of suicide (3-6, 23). That percentage may be underestimated because accurate data depend on finding informants who knew the victim's state of mind in the weeks before death. Approximately 60% of all suicides occur in persons with a mood disorder (3, 4, 6, 7), and the rest occur in persons with various other psychiatric conditions, including schizophrenia; alcoholism (24); substance abuse (5, 25, 26); and personality disorders (27), such as borderline or antisocial personality disorder (23, 28-30). Lifetime mortality from suicide in discharged hospital populations is approximately 20% in persons with bipolar disorder (manic depression), 15% in persons with unipolar depression, 10% in persons with schizophrenia, 18% in persons with alcoholism, and 5% to 10% in persons with both borderline and antisocial personality disorders (29-33). These personality disorders are characterized by emotional liability, aggression, and impulsivity. The lifetime mortality due to suicide is lower in general psychiatric populations (34, 35). Although suicide is generally a complication of a psychiatric disorder, most persons with a psychiatric disorder never attempt suicide. Even the higher-risk groups, such as persons with unipolar or bipolar mood disorders, have a lifetime suicide attempt rate less than 50%. Thus, persons with these psychiatric disorders who die by suicide differ from those who never attempt suicide. To understand those differences, investigators have compared persons who have attempted suicide and those who have not by matching psychiatric diagnosis and comparable objective severity of illness (10). Suicide attempters differ in two important ways from nonattempters with the same psychiatric disorder. First, they experience more subjective depression and hopelessness and, in particular, have more severe suicidal ideation. They also perceive fewer reasons for living despite having the same objective severity of psychiatric illness and a similar number of adverse life events. One possible explanation for the greater sense of hopelessness and greater number of suicidal ideations is a predisposition for such feelings in the face of illness or other life stressor. The pressure of greater lifetime aggressivity and impulsivity suggests a second diathesis element in suicidal patients. These individuals not only are more aggressive toward others and their environment but are more impulsive in other ways that involve, for example, relationships or personal decisions about a job or purchases. A propensity for more severe suicidal ideation and a greater likelihood of acting on powerful feelings combine to place some patients at greater risk for suicide attempts than others. For clinicians, important indicators of such a diathesis are a history of a suicide attempt, which indicates the presence of a diathesis for suicidal behavior, and a family history of suicidal behavior. Suicidal behavior is known to be transmitted within families, ",
"title": ""
},
{
"docid": "333b21433d17a9d271868e203c8a9481",
"text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).",
"title": ""
},
{
"docid": "9030887c9d95a80ac59e645f19b7e848",
"text": "The notion of a neuron that responds selectively to the image of a particular complex object has been controversial ever since Gross and his colleagues reported neurons in the temporal cortex of monkeys that were selective for the sight of a monkey's hand (Gross, Rocha-Miranda, & Bender, 1972). Since that time, evidence has mounted for neurons in the temporal lobe that respond selectively to faces. The present paper presents a critical analysis of the evidence for face neurons and discusses the implications of these neurons for models of object recognition. The paper also presents some possible reasons for the evolution of face neurons and suggests some analogies with the development of language in humans.",
"title": ""
},
{
"docid": "2afbf85020a40b7e1476d19419e7a2bd",
"text": "Coronary artery disease is the leading global cause of mortality. Long recognized to be heritable, recent advances have started to unravel the genetic architecture of the disease. Common variant association studies have linked approximately 60 genetic loci to coronary risk. Large-scale gene sequencing efforts and functional studies have facilitated a better understanding of causal risk factors, elucidated underlying biology and informed the development of new therapeutics. Moving forwards, genetic testing could enable precision medicine approaches by identifying subgroups of patients at increased risk of coronary artery disease or those with a specific driving pathophysiology in whom a therapeutic or preventive approach would be most useful.",
"title": ""
},
{
"docid": "ff8d55e7b997a9888fafade0366c3ce2",
"text": "OBJECTIVE\nTumors within Meckel's cave are challenging and often require complex approaches. In this report, an expanded endoscopic endonasal approach is reported as a substitute for or complement to other surgical options for the treatment of various tumors within this region.\n\n\nMETHODS\nA database of more than 900 patients who underwent the expanded endoscopic endonasal approach at the University of Pittsburgh Medical Center from 1998 to March of 2008 were reviewed. From these, only patients who had an endoscopic endonasal approach to Meckel's cave were considered. The technique uses the maxillary sinus and the pterygopalatine fossa as part of the working corridor. Infraorbital/V2 and the vidian neurovascular bundles are used as surgical landmarks. The quadrangular space is opened, which is bound by the internal carotid artery medially and inferiorly, V2 laterally, and the abducens nerve superiorly. This offers direct access to the anteroinferomedial segment of Meckel's cave, which can be extended through the petrous bone to reach the cerebellopontine angle.\n\n\nRESULTS\nForty patients underwent an endoscopic endonasal approach to Meckel's cave. The most frequent abnormalities encountered were adenoid cystic carcinoma, meningioma, and schwannomas. Meckel's cave and surrounding structures were accessed adequately in all patients. Five patients developed a new facial numbness in at least 1 segment of the trigeminal nerve, but the deficit was permanent in only 2. Two patients had a transient VIth cranial nerve palsy. Nine patients (30%) showed improvement of preoperative deficits on Cranial Nerves III to VI.\n\n\nCONCLUSION\nIn selected patients, the expanded endoscopic endonasal approach to the quadrangular space provides adequate exposure of Meckel's cave and its vicinity, with low morbidity.",
"title": ""
},
{
"docid": "5213aa65c5a291f0839046607dcf5f6c",
"text": "The distribution and mobility of chromium in the soils and sludge surrounding a tannery waste dumping area was investigated to evaluate its vertical and lateral movement of operational speciation which was determined in six steps to fractionate the material in the soil and sludge into (i) water soluble, (ii) exchangeable, (iii) carbonate bound, (iv) reducible, (v) oxidizable, and (vi) residual phases. The present study shows that about 63.7% of total chromium is mobilisable, and 36.3% of total chromium is nonbioavailable in soil, whereas about 30.2% of total chromium is mobilisable, and 69.8% of total chromium is non-bioavailable in sludge. In contaminated sites the concentration of chromium was found to be higher in the reducible phase in soils (31.3%) and oxidisable phases in sludge (56.3%) which act as the scavenger of chromium in polluted soils. These results also indicate that iron and manganese rich soil can hold chromium that will be bioavailable to plants and biota. Thus, results of this study can indicate the status of bioavailable of chromium in this area, using sequential extraction technique. So a suitable and proper management of handling tannery sludge in the said area will be urgently needed to the surrounding environment as well as ecosystems.",
"title": ""
},
{
"docid": "8a83060c0a454a5f7a13114846bbe9c5",
"text": "Evolutionary Algorithms (EAs) are a fascinating branch of computational intelligence with much potential for use in many application areas. The fundamental principle of EAs is to use ideas inspired by the biological mechanisms observed in nature, such as selection and genetic changes, to find the best solution for a given optimization problem. Generally, EAs use iterative processes, by growing a population of solutions selected in a guided random search and using parallel processing, in order to achieve a desired result. Such population based approaches, for example particle swarm and ant colony optimization (inspired from biology), are among the most popular metaheuristic methods being used in machine learning, along with others such as the simulated annealing (inspired from thermodynamics). In this paper, we provide a short survey on the state-of-the-art of EAs, beginning with some background on the theory of evolution and contrasting the original ideas of Darwin and Lamarck; we then continue with a discussion on the analogy between biological and computational sciences, and briefly describe some fundamentals of EAs, including the Genetic Algorithms, Genetic Programming, Evolution Strategies, Swarm Intelligence Algorithms (i.e., Particle Swarm Optimization, Ant Colony Optimization, Bacteria Foraging Algorithms, Bees Algorithm, Invasive Weed Optimization), Memetic Search, Differential Evolution Search, Artificial Immune Systems, Gravitational Search Algorithm, Intelligent Water Drops Algorithm. We conclude with a short description of the usefulness of EAs for Knowledge Discovery and Data Mining tasks and present some open problems and challenges to further stimulate research.",
"title": ""
},
{
"docid": "55b2465349e4965a35b4c894c5545afb",
"text": "Context-awareness is a key concept in ubiquitous computing. But to avoid developing dedicated context-awareness sub-systems for specific application areas there is a need for more generic programming frameworks. Such frameworks can help the programmer to develop and deploy context-aware applications faster. This paper describes the Java Context-Awareness Framework – JCAF, which is a Java-based context-awareness infrastructure and programming API for creating context-aware computer applications. The paper presents the design principles behind JCAF, its runtime architecture, and its programming API. The paper presents some applications of using JCAF in three different applications and discusses lessons learned from using JCAF.",
"title": ""
},
{
"docid": "75a1832a5fdd9c48f565eb17e8477b4b",
"text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.",
"title": ""
},
{
"docid": "87dd019430e4345026b8de22f696c6e2",
"text": "Although consumer research began focusing on emotional response to advertising during the 1980s (Goodstein, Edell, and Chapman Moore. 1990; Burke and Edell, 1989; Aaker, Stayman, and Vezina, 1988; Holbrook and Batra, 1988), perhaps one of the most practical measures of affective response has only recently emerged. Part of the difficulty in developing measures of emotional response stems from the complexity of emotion itself (Plummer and Leckenby, 1985). Researchers have explored several different measurement formats including: verbal self-reports (adjective checklists), physiological techniques, photodecks, and dial-turning instruments.",
"title": ""
},
{
"docid": "5ffe358766049379b0910ac1181100af",
"text": "A novel one-section bandstop filter (BSF), which possesses the characteristics of compact size, wide bandwidth, and low insertion loss is proposed and fabricated. This bandstop filter was constructed by using single quarter-wavelength resonator with one section of anti-coupled lines with short circuits at one end. The attenuation-pole characteristics of this type of bandstop filters are investigated through TEM transmission-line model. Design procedures are clearly presented. The 3-dB bandwidth of the first stopband and insertion loss of the first passband of this BSF is from 2.3 GHz to 9.5 GHz and below 0.3 dB, respectively. There is good agreement between the simulated and experimental results.",
"title": ""
},
{
"docid": "ce2d4247b1072b3c593e73fe9d67cf63",
"text": "OBJECTIVE\nTo improve walking and other aspects of physical function with a progressive 6-month exercise program in patients with multiple sclerosis (MS).\n\n\nMETHODS\nMS patients with mild to moderate disability (Expanded Disability Status Scale scores 1.0 to 5.5) were randomly assigned to an exercise or control group. The intervention consisted of strength and aerobic training initiated during 3-week inpatient rehabilitation and continued for 23 weeks at home. The groups were evaluated at baseline and at 6 months. The primary outcome was walking speed, measured by 7.62 m and 500 m walk tests. Secondary outcomes included lower extremity strength, upper extremity endurance and dexterity, peak oxygen uptake, and static balance. An intention-to-treat analysis was used.\n\n\nRESULTS\nNinety-one (96%) of the 95 patients entering the study completed it. Change between groups was significant in the 7.62 m (p = 0.04) and 500 m walk tests (p = 0.01). In the 7.62 m walk test, 22% of the exercising patients showed clinically meaningful improvements. The exercise group also showed increased upper extremity endurance as compared to controls. No other noteworthy exercise-induced changes were observed. Exercise adherence varied considerably among the exercisers.\n\n\nCONCLUSIONS\nWalking speed improved in this randomized study. The results confirm that exercise is safe for multiple sclerosis patients and should be recommended for those with mild to moderate disability.",
"title": ""
},
{
"docid": "d57533a410ea82ed6355eddf4eb72874",
"text": "The aim of this paper is twofold: (i) to introduce the framework of update semantics and to explain what kind of phenomena may successfully be analysed in it; (ii) to give a detailed analysis of one such phenomenon: default reasoning.",
"title": ""
},
{
"docid": "e46c6e50325d2603a5ae31080f7bfeb5",
"text": "End-to-end learning machines enable a direct mapping from the raw input data to the desired outputs, eliminating the need for handcrafted features. Despite less engineering effort than the hand-crafted counterparts, these learning machines achieve extremely good results for many computer vision and medical image analysis tasks. Two dominant classes of end-to-end learning machines are massive-training artificial neural networks (MTANNs) and convolutional neural networks (CNNs). Although MTANNs have been actively used for a number of medical image analysis tasks over the past two decades, CNNs have recently gained popularity in the field of medical imaging. In this study, we have compared these two successful learning machines both experimentally and theoretically. For that purpose, we considered two well-studied topics in the field of medical image analysis: detection of lung nodules and distinction between benign and malignant lung nodules in computed tomography (CT). For a thorough analysis, we used 2 optimized MTANN architectures and 4 distinct CNN architectures that have different depths. Our experiments demonstrated that the performance of MTANNs was substantially higher than that of CNN when using only limited training data. With a larger training dataset, the performance gap became less evident even though the margin was still significant. Specifically, for nodule detection, MTANNs generated 2.7 false positives per patient at 100% sensitivity, which was significantly (p<.05) lower than the best performing CNN model with 22.7 false positives per patient at the same level of sensitivity. For nodule classification, MTANNs yielded an area under the receiver-operating-characteristic curve (AUC) of 0.8806 (95% CI: 0.8389 to 0.9223), which was significantly (p<.05) greater than the best performing CNN model with an AUC of 0.7755 (95% CI: 0.7120 to 0.8270). Thus, with limited training data, MTANNs would be a suitable end-to-end machine-learning model for detection and classification of focal lesions that do not require high-level semantic features.",
"title": ""
}
] | scidocsrr |
bef53125e1b6c8d51b427714f4886e96 | Learning compound multi-step controllers under unknown dynamics | [
{
"docid": "bbb08c98a2265c53ba590e0872e91e1d",
"text": "Reinforcement learning (RL) is one of the most general approaches to learning control. Its applicability to complex motor systems, however, has been largely impossible so far due to the computational difficulties that reinforcement learning encounters in high dimensional continuous state-action spaces. In this paper, we derive a novel approach to RL for parameterized control policies based on the framework of stochastic optimal control with path integrals. While solidly grounded in optimal control theory and estimation theory, the update equations for learning are surprisingly simple and have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a robot dog illustrates the functionality of our algorithm in a real-world scenario. We believe that our new algorithm, Policy Improvement with Path Integrals (PI2), offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL in robotics.",
"title": ""
},
{
"docid": "e52c40a4fcb6cdb3d9b177e371127185",
"text": "Over the last years, there has been substantial progress in robust manipulation in unstructured environments. The long-term goal of our work is to get away from precise, but very expensive robotic systems and to develop affordable, potentially imprecise, self-adaptive manipulator systems that can interactively perform tasks such as playing with children. In this paper, we demonstrate how a low-cost off-the-shelf robotic system can learn closed-loop policies for a stacking task in only a handful of trials—from scratch. Our manipulator is inaccurate and provides no pose feedback. For learning a controller in the work space of a Kinect-style depth camera, we use a model-based reinforcement learning technique. Our learning method is data efficient, reduces model bias, and deals with several noise sources in a principled way during long-term planning. We present a way of incorporating state-space constraints into the learning process and analyze the learning gain by exploiting the sequential structure of the stacking task.",
"title": ""
},
{
"docid": "ce56d594c7ee2a935b2b8b243d892070",
"text": "We introduce a skill discovery method for reinforcement learning in continuous domains that constructs chains of skills leading to an end-of-task reward. We demonstrate experimentally that it creates appropriate skills and achieves performance benefits in a challenging continuous domain.",
"title": ""
}
] | [
{
"docid": "20b7dfaa400433b6697393d4e265d78d",
"text": "Security Operation Centers (SOCs) are being operated by universities, government agencies, and corporations to defend their enterprise networks in general and in particular to identify malicious behaviors in both networks and hosts. The success of a SOC depends on having the right tools, processes and, most importantly, efficient and effective analysts. One of the worrying issues in recent times has been the consistently high burnout rates of security analysts in SOCs. Burnout results in analysts making poor judgments when analyzing security events as well as frequent personnel turnovers. In spite of high awareness of this problem, little has been known so far about the factors leading to burnout. Various coping strategies employed by SOC management such as career progression do not seem to address the problem but rather deal only with the symptoms. In short, burnout is a manifestation of one or more underlying issues in SOCs that are as of yet unknown. In this work we performed an anthropological study of a corporate SOC over a period of six months and identified concrete factors contributing to the burnout phenomenon. We use Grounded Theory to analyze our fieldwork data and propose a model that explains the burnout phenomenon. Our model indicates that burnout is a human capital management problem resulting from the cyclic interaction of a number of human, technical, and managerial factors. Specifically, we identified multiple vicious cycles connecting the factors affecting the morale of the analysts. In this paper we provide detailed descriptions of the various vicious cycles and suggest ways to turn these cycles into virtuous ones. We further validated our results on the fieldnotes from a SOC at a higher education institution. The proposed model is able to successfully capture and explain the burnout symptoms in this other SOC as well. Copyright is held by the author/owner. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee. Symposium on Usable Privacy and Security (SOUPS) 2015, July 22–24, 2015, Ottawa, Canada.",
"title": ""
},
{
"docid": "624a84c3ebb544385204bb53608cb5a4",
"text": "Electron microscopic visualization of nervous tissue morphology is crucial when aiming to understand the biogenesis and structure of myelin in healthy and pathological conditions. However, accurate interpretation of electron micrographs requires excellent tissue preservation. In this short review we discuss the recent utilization of tissue fixation by high-pressure freezing and freeze-substitution, which now supplements aldehyde fixation in the preparation of samples for electron microscopy of myelin. Cryofixation has proven well suited to yield both, improved contrast and excellent preservation of structural detail of the axon/myelin-unit in healthy and mutant mice and can also be applied to other model organisms, including aquatic species. This article is part of a Special Issue entitled SI: Myelin Evolution.",
"title": ""
},
{
"docid": "995bcaf8f6d3a3475d3e0ad0d492e7cb",
"text": "This paper presents a comprehensive study of demodulation techniques for high-frequency self-oscillating eddy-current displacement sensor (ECDS) interfaces. Increasing the excitation frequency is essential for lowering the skin depth in many demanding industrial applications, that require better resolution. However, a high excitation frequency poses design challenges in the readout electronics, and particularly in the demodulation functional block. We analyze noise, linearity, and stability design considerations in amplitude demodulators for nanometer and sub-nanometer ECDSs. A number of state-of-the-art amplitude demodulation techniques employed in high-frequency ECDSs are reviewed, and their pros and cons are evaluated.",
"title": ""
},
{
"docid": "2e67e715961512e9b17efd82c977abfc",
"text": "In this paper we describe a methodology and an automatic procedure for inferring accurate and easily understandable expert-system-like rules from forensic data. This methodology is based on the fuzzy set theory. The algorithms we used are described in detail, and were tested on forensic data sets. We also present in detail some examples, which are representative for the obtained results.",
"title": ""
},
{
"docid": "457a23b087e59c6076ef6f9da7214fea",
"text": "Supervised learning is widely used in training autonomous driving vehicle. However, it is trained with large amount of supervised labeled data. Reinforcement learning can be trained without abundant labeled data, but we cannot train it in reality because it would involve many unpredictable accidents. Nevertheless, training an agent with good performance in virtual environment is relatively much easier. Because of the huge difference between virtual and real, how to fill the gap between virtual and real is challenging. In this paper, we proposed a novel framework of reinforcement learning with image semantic segmentation network to make the whole model adaptable to reality. The agent is trained in TORCS, a car racing simulator.",
"title": ""
},
{
"docid": "85908a576c13755e792d52d02947f8b3",
"text": "Quick Response Code has been widely used in the automatic identification fields. In order to adapting various sizes, a little dirty or damaged, and various lighting conditions of bar code image, this paper proposes a novel implementation of real-time Quick Response Code recognition using mobile, which is an efficient technology used for data transferring. An image processing system based on mobile is described to be able to binarize, locate, segment, and decode the QR Code. Our experimental results indicate that these algorithms are robust to real world scene image.",
"title": ""
},
{
"docid": "0222a78f29796f0747a11b027b2fe0d8",
"text": "Since last decade, face recognition has replaced almost all biometric authentication techniques available. Many algorithms are in existence today based on various features. In this paper, we have compared the performance of various classifiers like correlation, Artificial Neural Network (ANN) and Support Vector Machine (SVM) for Face Recognition. We have proposed face recognition based on discriminative features. Holistic featuresbased methods Fisher Discriminant Analysis (FDA) usused to extract outdiscriminative features from the input face image respectively. These features are used to train classifiers like Artificial Neural Network (ANN) and Support Vector Machine (SVM). Results in the last section describe the accuracy of proposed scheme. Keywords-Face Recognition, Fisher Discriminant Analysis, Artificial Neural Network, Support Vector Machine.",
"title": ""
},
{
"docid": "65271fcf27d43ef88910e0a872eec0b9",
"text": "Purpose – The purpose of this paper is to investige whether online environment cues (web site quality and web site brand) affect customer purchase intention towards an online retailer and whether this impact is mediated by customer trust and perceived risk. The study also aimed to assess the degree of reciprocity between consumers’ trust and perceived risk in the context of an online shopping environment. Design/methodology/approach – The study proposed a research framework for testing the relationships among the constructs based on the stimulus-organism-response framework. In addition, this study developed a non-recursive model. After the validation of measurement scales, empirical analyses were performed using structural equation modelling. Findings – The findings confirm that web site quality and web site brand affect consumers’ trust and perceived risk, and in turn, consumer purchase intention. Notably, this study finds that the web site brand is a more important cue than web site quality in influencing customers’ purchase intention. Furthermore, the study reveals that the relationship between trust and perceived risk is reciprocal. Research limitations/implications – This study adopted four dimensions – technical adequacy, content quality, specific content and appearance – to measure web site quality. However, there are still many competing concepts regarding the measurement of web site quality. Further studies using other dimensional measures may be needed to verify the research model. Practical implications – Online retailers should focus their marketing strategies more on establishing the brand of the web site rather than improving the functionality of the web site. Originality/value – This study proposed a non-recursive model for empirically analysing the link between web site quality, web site brand, trust, perceived risk and purchase intention towards the online retailer.",
"title": ""
},
{
"docid": "4e85039497c60f8241d598628790f543",
"text": "Knowledge management (KM) is a dominant theme in the behavior of contemporary organizations. While KM has been extensively studied in developed economies, it is much less well understood in developing economies, notably those that are characterized by different social and cultural traditions to the mainstream of Western societies. This is notably the case in China. This chapter develops and tests a theoretical model that explains the impact of leadership style and interpersonal trust on the intention of information and knowledge workers in China to share their knowledge with their peers. All the hypotheses are supported, showing that both initiating structure and consideration have a significant effect on employees’ intention to share knowledge through trust building: 28.2% of the variance in employees’ intention to share knowledge is explained. The authors discuss the theoretical contributions of the chapter, identify future research opportunities, and highlight the implications for practicing managers. DOI: 10.4018/978-1-60566-920-5.ch009",
"title": ""
},
{
"docid": "30aa4e82b5e8a8fb3cc7bea65f389014",
"text": "Numerous studies on the mechanisms of ankle injury deal with injuries to the syndesmosis and anterior ligamentous structures but a previous sectioning study also describes the important role of the posterior talofibular ligament (PTaFL) in the ankle's resistance to external rotation of the foot. It was hypothesized that failure level external rotation of the foot would lead to injury of the PTaFL. Ten ankles were tested by externally rotating the foot until gross injury. Two different frequencies of rotation were used in this study, 0.5 Hz and 2 Hz. The mean failure torque of the ankles was 69.5+/-11.7 Nm with a mean failure angle of 40.7+/-7.3 degrees . No effects of rotation frequency or flexion angle were noted. The most commonly injured structure was the PTaFL. Visible damage to the syndesmosis only occurred in combination with fibular fracture in these experiments. The constraint of the subtalar joint in the current study may have affected the mechanics of the foot and led to the resultant strain in the PTaFL. In the real world, talus rotations may be affected by athletic footwear that may influence the location and potential for an ankle injury under external rotation of the foot.",
"title": ""
},
{
"docid": "b727d8ddbdbaa9b4f5dbdc669bc4a454",
"text": "The linearity of a high-resolution pipelined analog- to-digital converter (ADC) is mainly limited by the capacitor mismatch and the finite operational amplifier (OPAMP) gain in the multiplying-digital-to-analog converter (MDAC). Therefore, high resolution pipelined ADCs usually require high-gain OPAMP and large capacitors, which causes large ADC power. In recent years, various nonlinear calibration techniques have been developed to compensate both linear and nonlinear error from MDCAs so that low-power MDACs with small capacitors and low-gain OPAMP can be used. Hence, the ADC power can be greatly reduced. This paper introduces a novel interpolation- based digital self-calibration architecture for pipelined ADC. Compared to previous techniques, the new architecture is free of adaptation. Hence, long convergence is not needed. The complexity of the digital processor is also considerably lower. The new architecture does not use backend ADC to measure MDACs. Hence, it is free of the accumulation of measurement error, which leads to more accurate calibration. A prototype ADC with the calibration architecture is fabricated in a 0.35 3.3 V CMOS process. The ADC samples at 20 MS/s. The calibration improves the ADC DNL and INL from 1.47 LSB and 7.85 LSB to 0.2 LSB and 0.27 LSB. For a 590 kHz sinusoidal signal, the calibration improves the ADC signal-to-noise-distortion ratio(SNDR) and spurious-free dynamic range (SFDR) from 41.3 dB and 52.1 dB to 72.5 dB and 84.4 dB respectively. The 11.8-ENOB 20 MS/s ADC consumes 56.3 mW power with 3.3 V supply. The 0.78 pJ/step figure-of-merit (FOM) is low for designs in 0.35 CMOS processes. At the Nyquist frequency, SNDR of the calibrated ADC drops 8 dB due to the slow settling of the first pipeline stage.",
"title": ""
},
{
"docid": "d6496dd2c1e8ac47dc12fde28c83a3d4",
"text": "We describe a natural extension of the banker’s algorithm for deadlock avoidance in operating systems. Representing the control flow of each process as a rooted tree of nodes corresponding to resource requests and releases, we propose a quadratic-time algorithm which decomposes each flow graph into a nested family of regions, such that all allocated resources are released before the control leaves a region. Also, information on the maximum resource claims for each of the regions can be extracted prior to process execution. By inserting operating system calls when entering a new region for each process at runtime, and applying the original banker’s algorithm for deadlock avoidance, this method has the potential to achieve better resource utilization because information on the “localized approximate maximum claims” is used for testing system safety.",
"title": ""
},
{
"docid": "b1e88df71ec0dc3bcd80ba395151743f",
"text": "The hair thread is a natural fiber formed by keratin, a protein containing high concentration of sulfur coming from the amino acid cystine. The main physical proprieties of the hair depend mostly on its geometry; the physical and mechanical properties of hair involve characteristics to improve: elasticity, smoothness, volume, shine, and softness due to both the significant adherence of the cuticle scales and the movement control (malleability), as well as the easiness of combing, since they reduce the fibers static electricity. The evaluation of these effects on hair may be carried out by several methods, as: optical and electron microscopy, mechanical resistance measuring, shine evaluation and optical coherence tomography (OCT).",
"title": ""
},
{
"docid": "d2a4213c14a439d231f6be8f54c1dc41",
"text": "Asymmetric multi-core architectures integrating cores with diverse power-performance characteristics is emerging as a promising alternative in the dark silicon era where only a fraction of the cores on chip can be powered on due to thermal limits. We introduce a hierarchical power management framework for asymmetric multi-cores that builds on control theory and coordinates multiple controllers in a synergistic manner to achieve optimal power-performance efficiency while respecting the thermal design power budget. We integrate our framework within Linux and implement/evaluate it on real ARM big.LITTLE asymmetric multi-core platform.",
"title": ""
},
{
"docid": "7170a9d4943db078998e1844ad67ae9e",
"text": "Privacy has become increasingly important to the database community which is reflected by a noteworthy increase in research papers appearing in the literature. While researchers often assume that their definition of “privacy” is universally held by all readers, this is rarely the case; so many papers addressing key challenges in this domain have actually produced results that do not consider the same problem, even when using similar vocabularies. This paper provides an explicit definition of data privacy suitable for ongoing work in data repositories such as a DBMS or for data mining. The work contributes by briefly providing the larger context for the way privacy is defined legally and legislatively but primarily provides a taxonomy capable of thinking of data privacy technologically. We then demonstrate the taxonomy’s utility by illustrating how this perspective makes it possible to understand the important contribution made by researchers to the issue of privacy. The conclusion of this paper is that privacy is indeed multifaceted so no single current research effort adequately addresses the true breadth of the issues necessary to fully understand the scope of this important issue.",
"title": ""
},
{
"docid": "0034b7f8160f504bd3de5125cf33fea6",
"text": "By taking into account simultaneously the effects of border traps and interface states, the authors model the alternating current capacitance-voltage (C-V) behavior of high-mobility substrate metal-oxide-semiconductor (MOS) capacitors. The results are validated with the experimental In0.53Ga0.47As/ high-κ and InP/high-κ (C-V) curves. The simulated C-V and conductance-voltage (G-V) curves reproduce comprehensively the experimentally measured capacitance and conductance data as a function of bias voltage and measurement frequency, over the full bias range going from accumulation to inversion and full frequency spectra from 100 Hz to 1 MHz. The interface state densities of In0.53Ga0.47As and InP MOS devices with various high-κ dielectrics, together with the corresponding border trap density inside the high-κ oxide, were derived accordingly. The derived interface state densities are consistent to those previously obtained with other measurement methods. The border traps, distributed over the thickness of the high- κ oxide, show a large peak density above the two semiconductor conduction band minima. The total density of border traps extracted is on the order of 1019 cm-3. Interface and border trap distributions for InP and In0.53Ga0.47As interfaces with high-κ oxides show remarkable similarities on an energy scale relative to the vacuum reference.",
"title": ""
},
{
"docid": "277919545c003c0c2a266ace0d70de03",
"text": "Two single-pole, double-throw transmit/receive switches were designed and fabricated with different substrate resistances using a 0.18-/spl mu/m p/sup $/substrate CMOS process. The switch with low substrate resistances exhibits 0.8-dB insertion loss and 17-dBm P/sub 1dB/ at 5.825 GHz, whereas the switch with high substrate resistances has 1-dB insertion loss and 18-dBm P/sub 1dB/. These results suggest that the optimal insertion loss can be achieved with low substrate resistances and 5.8-GHz T/R switches with excellent insertion loss and reasonable power handling capability can be implemented in a 0.18-/spl mu/m CMOS process.",
"title": ""
},
{
"docid": "53598a996f31476b32871cf99f6b84f0",
"text": "The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track included three tasks involving: (1A) identifying relationships between citing documents and the referred document, (1B) classifying the discourse facets, and (2) generating the abstractive summary. The dataset comprised 30 annotated sets of citing and reference papers from the open access research papers in the CL domain. This overview paper describes the participation and the official results of the second CL-SciSumm Shared Task, organized as a part of the Joint Workshop onBibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2016), held in New Jersey,USA in June, 2016. The annotated dataset used for this shared task and the scripts used for evaluation can be accessed and used by the community at: https://github.com/WING-NUS/scisumm-corpus.",
"title": ""
}
] | scidocsrr |
0854889eec567aae60cd300f94181f11 | On the Synergy of Network Science and Artificial Intelligence | [
{
"docid": "ba2029c92fc1e9277e38edff0072ac82",
"text": "Estimation, recognition, and near-future prediction of 3D trajectories based on their two dimensional projections available from one camera source is an exceptionally difficult problem due to uncertainty in the trajectories and environment, high dimensionality of the specific trajectory states, lack of enough labeled data and so on. In this article, we propose a solution to solve this problem based on a novel deep learning model dubbed disjunctive factored four-way conditional restricted Boltzmann machine (DFFW-CRBM). Our method improves state-of-the-art deep learning techniques for high dimensional time-series modeling by introducing a novel tensor factorization capable of driving forth order Boltzmann machines to considerably lower energy levels, at no computational costs. DFFW-CRBMs are capable of accurately estimating, recognizing, and performing near-future prediction of three-dimensional trajectories from their 2D projections while requiring limited amount of labeled data. We evaluate our method on both simulated and real-world data, showing its effectiveness in predicting and classifying complex ball trajectories and human activities.",
"title": ""
}
] | [
{
"docid": "13d9b338b83a5fcf75f74607bf7428a7",
"text": "We extend the neural Turing machine (NTM) model into a dynamic neural Turing machine (D-NTM) by introducing trainable address vectors. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous and discrete read and write mechanisms. We investigate the mechanisms and effects of learning to read and write into a memory through experiments on Facebook bAbI tasks using both a feedforward and GRU controller. We provide extensive analysis of our model and compare different variations of neural Turing machines on this task. We show that our model outperforms long short-term memory and NTM variants. We provide further experimental results on the sequential MNIST, Stanford Natural Language Inference, associative recall, and copy tasks.",
"title": ""
},
{
"docid": "2a914d703108f165aecbb7ad1a2dde2c",
"text": "The general objective of our work is to investigate the area and power-delay performances of low-voltage full adder cells in different CMOS logic styles for the predominating tree structured arithmetic circuits. A new hybrid style full adder circuit is also presented. The sum and carry generation circuits of the proposed full adder are designed with hybrid logic styles. To operate at ultra-low supply voltage, the pass logic circuit that cogenerates the intermediate XOR and XNOR outputs has been improved to overcome the switching delay problem. As full adders are frequently employed in a tree structured configuration for high-performance arithmetic circuits, a cascaded simulation structure is introduced to evaluate the full adders in a realistic application environment. A systematic and elegant procedure to scale the transistor for minimal power-delay product is proposed. The circuits being studied are optimized for energy efficiency at 0.18-/spl mu/m CMOS process technology. With the proposed simulation environment, it is shown that some survival cells in stand alone operation at low voltage may fail when cascaded in a larger circuit, either due to the lack of drivability or unsatisfactory speed of operation. The proposed hybrid full adder exhibits not only the full swing logic and balanced outputs but also strong output drivability. The increase in the transistor count of its complementary CMOS output stage is compensated by its area efficient layout. Therefore, it remains one of the best contenders for designing large tree structured arithmetic circuits with reduced energy consumption while keeping the increase in area to a minimum.",
"title": ""
},
{
"docid": "2f44362f2c294580240d99a8cc402d1f",
"text": "Research problem: The study explored think-aloud methods usage within usability testing by examining the following questions: How, and why is the think-aloud method used? What is the gap between theory and practice? Where does this gap occur? Literature review: The review informed the survey design. Usability research based on field studies and empirical tests indicates that variations in think-aloud procedures may reduce test reliability. The guidance offered on think-aloud procedures within a number of handbooks on usability testing is also mixed. This indicates potential variability in practice, but how much and for what reasons is unknown. Methodology: An exploratory, qualitative survey was conducted using a web-based questionnaire (during November-December 2010). Usability evaluators were sought via emails (sent to personal contacts, usability companies, conference attendees, and special interest groups) to be cascaded to the international community. As a result we received 207 full responses. Descriptive statistics and thematic coding were used to analyze the data sets. Results: Respondents found the concurrent technique particularly suited usability testing as it was fast, easy for users to relate to, and requires limited resources. Divergent practice was reported in terms of think-aloud instructions, practice, interventions, and the use of demonstrations. A range of interventions was used to better understand participant actions and verbalizations, however, respondents were aware of potential threats to test reliability, and took steps to reduce this impact. Implications: The reliability considerations underpinning the classic think-aloud approach are pragmatically balanced against the need to capture useful data in the time available. A limitation of the study is the focus on the concurrent method; other methods were explored but the differences in application were not considered. Future work is needed to explore the impact of divergent use of think-aloud instructions, practice tasks, and the use of demonstrations on test reliability.",
"title": ""
},
{
"docid": "b8fa50df3c76c2192c67cda7ae4d05f5",
"text": "Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP.\n Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied.",
"title": ""
},
{
"docid": "42eca5d49ef3e27c76b65f8feccd8499",
"text": "Convolutional Neural Networks (CNNs) have shown to yield very strong results in several Computer Vision tasks. Their application to language has received much less attention, and it has mainly focused on static classification tasks, such as sentence classification for Sentiment Analysis or relation extraction. In this work, we study the application of CNNs to language modeling, a dynamic, sequential prediction task that needs models to capture local as well as long-range dependency information. Our contribution is twofold. First, we show that CNNs achieve 11-26% better absolute performance than feed-forward neural language models, demonstrating their potential for language representation even in sequential tasks. As for recurrent models, our model outperforms RNNs but is below state of the art LSTM models. Second, we gain some understanding of the behavior of the model, showing that CNNs in language act as feature detectors at a high level of abstraction, like in Computer Vision, and that the model can profitably use information from as far as 16 words before the target.",
"title": ""
},
{
"docid": "5c4f313482543223306be014cff0cc2e",
"text": "Transformer inrush currents are high-magnitude, harmonic rich currents generated when transformer cores are driven into saturation during energization. These currents have undesirable effects, including potential damage or loss-of-life of transformer, protective relay miss operation, and reduced power quality on the system. This paper explores the theoretical explanations of inrush currents and explores different factors that have influences on the shape and magnitude of those inrush currents. PSCAD/EMTDC is used to investigate inrush currents phenomena by modeling a practical power system circuit for single phase transformer",
"title": ""
},
{
"docid": "54d54094acea1900e183144d32b1910f",
"text": "A large body of work has been devoted to address corporate-scale privacy concerns related to social networks. Most of this work focuses on how to share social networks owned by organizations without revealing the identities or the sensitive relationships of the users involved. Not much attention has been given to the privacy risk of users posed by their daily information-sharing activities.\n In this article, we approach the privacy issues raised in online social networks from the individual users’ viewpoint: we propose a framework to compute the privacy score of a user. This score indicates the user’s potential risk caused by his or her participation in the network. Our definition of privacy score satisfies the following intuitive properties: the more sensitive information a user discloses, the higher his or her privacy risk. Also, the more visible the disclosed information becomes in the network, the higher the privacy risk. We develop mathematical models to estimate both sensitivity and visibility of the information. We apply our methods to synthetic and real-world data and demonstrate their efficacy and practical utility.",
"title": ""
},
{
"docid": "35225f6ca92daf5b17bdd2a5395b83ca",
"text": "A neural network with a single layer of hidden units of gaussian type is proved to be a universal approximator for real-valued maps defined on convex, compact sets of Rn.",
"title": ""
},
{
"docid": "eb836852ea301e07dcc6c022f89fd8a8",
"text": "This paper proposes a practical circuit-based model for Li-ion cells, which can be directly connected to a model of a complete electric vehicle (EV) system. The goal of this paper is to provide EV system designers with a tool in simulation programs such as Matlab/Simulink to model the behaviour of Li-ion cells under various operating conditions in EV or other applications. The current direction, state of charge (SoC), temperature and C-rate dependency are represented by empirical equations obtained from measurements on LiFePO4 cells. Tradeoffs between model complexity and accuracy have been made based on practical considerations in EV applications. Depending on the required accuracy and operating conditions, the EV system designer can choose the influences to be included in the system simulation.",
"title": ""
},
{
"docid": "a4922f728f50fa06a63b826ed84c9f24",
"text": "Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this “reality gap”. By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.",
"title": ""
},
{
"docid": "5bef975924d427c3ae186d92a93d4f74",
"text": "The Voronoi diagram of a set of sites partitions space into regions, one per site; the region for a site s consists of all points closer to s than to any other site. The dual of the Voronoi diagram, the Delaunay triangulation, is the unique triangulation such that the circumsphere of every simplex contains no sites in its interior. Voronoi diagrams and Delaunay triangulations have been rediscovered or applied in many areas of mathematics and the natural sciences; they are central topics in computational geometry, with hundreds of papers discussing algorithms and extensions. Section 27.1 discusses the definition and basic properties in the usual case of point sites in R with the Euclidean metric, while Section 27.2 gives basic algorithms. Some of the many extensions obtained by varying metric, sites, environment, and constraints are discussed in Section 27.3. Section 27.4 finishes with some interesting and nonobvious structural properties of Voronoi diagrams and Delaunay triangulations.",
"title": ""
},
{
"docid": "e78d82c45dcb5297244f98ef0d26c10e",
"text": "The current study examines changes over time in a commonly used measure of dispositional empathy. A cross-temporal meta-analysis was conducted on 72 samples of American college students who completed at least one of the four subscales (Empathic Concern, Perspective Taking, Fantasy, and Personal Distress) of the Interpersonal Reactivity Index (IRI) between 1979 and 2009 (total N = 13,737). Overall, the authors found changes in the most prototypically empathic subscales of the IRI: Empathic Concern was most sharply dropping, followed by Perspective Taking. The IRI Fantasy and Personal Distress subscales exhibited no changes over time. Additional analyses found that the declines in Perspective Taking and Empathic Concern are relatively recent phenomena and are most pronounced in samples from after 2000.",
"title": ""
},
{
"docid": "c75b7ad0faf841b7ec4ae7f91d236259",
"text": "People have been shown to project lifelike attributes onto robots and to display behavior indicative of empathy in human-robot interaction. Our work explores the role of empathy by examining how humans respond to a simple robotic object when asked to strike it. We measure the effects of lifelike movement and stories on people's hesitation to strike the robot, and we evaluate the relationship between hesitation and people's trait empathy. Our results show that people with a certain type of high trait empathy (empathic concern) hesitate to strike the robots. We also find that high empathic concern and hesitation are more strongly related for robots with stories. This suggests that high trait empathy increases people's hesitation to strike a robot, and that stories may positively influence their empathic responses.",
"title": ""
},
{
"docid": "fd0c32b1b4e52f397d0adee5de7e381c",
"text": "Context. Electroencephalography (EEG) is a complex signal and can require several years of training, as well as advanced signal processing and feature extraction methodologies to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. Objective. In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, braincomputer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches from this large body of literature in order to inform future research and formulate recommendations. Methods. Major databases spanning the fields of science and engineering were queried to identify relevant studies published in scientific journals, conferences, and electronic preprint repositories. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. These items were then analyzed one by one to uncover trends. Results. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours, while the number of samples seen during training by a network varies from a few dozens to several millions, depending on how epochs are extracted. Interestingly, we saw that more than half the studies used publicly available data and that there has also been a clear shift from intra-subject to inter-subject approaches over the last few years. About 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. ∗The first two authors contributed equally to this work. Significance. To help the community progress and share work more effectively, we provide a list of recommendations for future studies. We also make our summary table of DL and EEG papers available and invite authors of published work to contribute to it directly.",
"title": ""
},
{
"docid": "3910a3317ea9ff4ea6c621e562b1accc",
"text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.",
"title": ""
},
{
"docid": "5116079b69aeb1858177429fabd10f80",
"text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.",
"title": ""
},
{
"docid": "05cd3cd38b699c0dea7fd2ba771ed770",
"text": "Background: Electric vehicles have been identified as being a key technology in reducing future emissions and energy consumption in the mobility sector. The focus of this article is to review and assess the energy efficiency and the environmental impact of battery electric cars (BEV), which is the only technical alternative on the market available today to vehicles with internal combustion engine (ICEV). Electricity onboard a car can be provided either by a battery or a fuel cell (FCV). The technical structure of BEV is described, clarifying that it is relatively simple compared to ICEV. Following that, ICEV can be ‘e-converted’ by experienced personnel. Such an e-conversion project generated reality-close data reported here. Results: Practicability of today's BEV is discussed, revealing that particularly small-size BEVs are useful. This article reports on an e-conversion of a used Smart. Measurements on this car, prior and after conversion, confirmed a fourfold energy efficiency advantage of BEV over ICEV, as supposed in literature. Preliminary energy efficiency data of FCV are reviewed being only slightly lower compared to BEV. However, well-to-wheel efficiency suffers from 47% to 63% energy loss during hydrogen production. With respect to energy efficiency, BEVs are found to represent the only alternative to ICEV. This, however, is only true if the electricity is provided by very efficient power plants or better by renewable energy production. Literature data on energy consumption and greenhouse gas (GHG) emission by ICEV compared to BEV suffer from a 25% underestimation of ICEV-standardized driving cycle numbers in relation to street conditions so far. Literature data available for BEV, on the other hand, were mostly modeled and based on relatively heavy BEV as well as driving conditions, which do not represent the most useful field of BEV operation. Literature data have been compared with measurements based on the converted Smart, revealing a distinct GHG emissions advantage due to the German electricity net conditions, which can be considerably extended by charging electricity from renewable sources. Life cycle carbon footprint of BEV is reviewed based on literature data with emphasis on lithium-ion batteries. Battery life cycle assessment (LCA) data available in literature, so far, vary significantly by a factor of up to 5.6 depending on LCA methodology approach, but also with respect to the battery chemistry. Carbon footprint over 100,000 km calculated for the converted 10-year-old Smart exhibits a possible reduction of over 80% in comparison to the Smart with internal combustion engine. Conclusion: Findings of the article confirm that the electric car can serve as a suitable instrument towards a much more sustainable future in mobility. This is particularly true for small-size BEV, which is underrepresented in LCA literature data so far. While CO2-LCA of BEV seems to be relatively well known apart from the battery, life cycle impact of BEV in categories other than the global warming potential reveals a complex and still incomplete picture. Since technology of the electric car is of limited complexity with the exception of the battery, used cars can also be converted from combustion to electric. This way, it seems possible to reduce CO2-equivalent emissions by 80% (factor 5 efficiency improvement). * Correspondence: e.helmers@umwelt-campus.de Institut für angewandtes Stoffstrommanagement (IfaS) am Umwelt-Campus Birkenfeld, Trier University of Applied Sciences, P.O. 
Box 1380 Birkenfeld, D-55761, Germany",
"title": ""
},
{
"docid": "7efc7056f11b61eb9c0d35c57e81a7f7",
"text": "Action Language is a specification language for reactive sof tware systems. In this paper, we present the syntax and the semantics of the Action Language and we also present an in finite-state symbolic model checker called Action Language Verifier (ALV) that verifies (or falsifies) CTL properti es of Action Language specifications. ALV is built on top of the Composite Symbolic Library, which is a symbolic manip ulator that combines multiple symbolic representations. ALV is a polymorphic model checker that can use different com binations of the symbolic representations implemented in the Composite Symbolic Library. We describe the heuristi cs implemented in ALV for computing fixpoints using the composite symbolic representation. Since Action Langu ge specifications allow declaration of unbounded integer variables and parameterized integer constants, verificati on of Action Language specifications is undecidable. ALV uses several heuristics to conservatively approximate the fixpoint computations. ALV also implements an automated abstraction technique that enables parameterized verifica tion of a concurrent system with an arbitrary number of identical processes.",
"title": ""
},
{
"docid": "54b4726650b3afcddafb120ff99c9951",
"text": "Online harassment has been a problem to a greater or lesser extent since the early days of the internet. Previous work has applied anti-spam techniques like machine-learning based text classification (Reynolds, 2011) to detecting harassing messages. However, existing public datasets are limited in size, with labels of varying quality. The #HackHarassment initiative (an alliance of 1 tech companies and NGOs devoted to fighting bullying on the internet) has begun to address this issue by creating a new dataset superior to its predecssors in terms of both size and quality. As we (#HackHarassment) complete further rounds of labelling, later iterations of this dataset will increase the available samples by at least an order of magnitude, enabling corresponding improvements in the quality of machine learning models for harassment detection. In this paper, we introduce the first models built on the #HackHarassment dataset v1.0 (a new open dataset, which we are delighted to share with any interested researcherss) as a benchmark for future research.",
"title": ""
},
{
"docid": "7a417c3fe0a93656f5628463d9c425e7",
"text": "Given a finite range space Σ = (X, R), with N = |X| + |R|, we present two simple algorithms, based on the multiplicative-weight method, for computing a small-size hitting set or set cover of Σ. The first algorithm is a simpler variant of the Brönnimann-Goodrich algorithm but more efficient to implement, and the second algorithm can be viewed as solving a two-player zero-sum game. These algorithms, in conjunction with some standard geometric data structures, lead to near-linear algorithms for computing a small-size hitting set or set cover for a number of geometric range spaces. For example, they lead to O(N polylog(N)) expected-time randomized O(1)-approximation algorithms for both hitting set and set cover if X is a set of points and ℜ a set of disks in R2.",
"title": ""
}
] | scidocsrr |
f67544bde50fcb5a22cea405184aaa65 | Overview of the improvement of the ring-stage survival assay-a novel phenotypic assay for the detection of artemisinin-resistant Plasmodium falciparum | [
{
"docid": "5995a2775a6a10cf4f2bd74a2959935d",
"text": "Artemisinin-based combination therapy is recommended to treat Plasmodium falciparum worldwide, but observations of longer artemisinin (ART) parasite clearance times (PCTs) in Southeast Asia are widely interpreted as a sign of potential ART resistance. In search of an in vitro correlate of in vivo PCT after ART treatment, a ring-stage survival assay (RSA) of 0–3 h parasites was developed and linked to polymorphisms in the Kelch propeller protein (K13). However, RSA remains a laborious process, involving heparin, Percoll gradient, and sorbitol treatments to obtain rings in the 0–3 h window. Here two alternative RSA protocols are presented and compared to the standard Percoll-based method, one highly stage-specific and one streamlined for laboratory application. For all protocols, P. falciparum cultures were synchronized with 5 % sorbitol treatment twice over two intra-erythrocytic cycles. For a filtration-based RSA, late-stage schizonts were passed through a 1.2 μm filter to isolate merozoites, which were incubated with uninfected erythrocytes for 45 min. The erythrocytes were then washed to remove lysis products and further incubated until 3 h post-filtration. Parasites were pulsed with either 0.1 % dimethyl sulfoxide (DMSO) or 700 nM dihydroartemisinin in 0.1 % DMSO for 6 h, washed twice in drug-free media, and incubated for 66–90 h, when survival was assessed by microscopy. For a sorbitol-only RSA, synchronized young (0–3 h) rings were treated with 5 % sorbitol once more prior to the assay and adjusted to 1 % parasitaemia. The drug pulse, incubation, and survival assessment were as described above. Ring-stage survival of P. falciparum parasites containing either the K13 C580 or C580Y polymorphism (associated with low and high RSA survival, respectively) were assessed by the described filtration and sorbitol-only methods and produced comparable results to the reported Percoll gradient RSA. Advantages of both new methods include: fewer reagents, decreased time investment, and fewer procedural steps, with enhanced stage-specificity conferred by the filtration method. Assessing P. falciparum ART sensitivity in vitro via RSA can be streamlined and accurately evaluated in the laboratory by filtration or sorbitol synchronization methods, thus increasing the accessibility of the assay to research groups.",
"title": ""
}
] | [
{
"docid": "b9538c45fc55caff8b423f6ecc1fe416",
"text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.",
"title": ""
},
{
"docid": "3e4a2d4564e9904b3d3b0457860da5cf",
"text": "Model-based, torque-level control can offer precision and speed advantages over velocity-level or position-level robot control. However, the dynamic parameters of the robot must be identified accurately. Several steps are involved in dynamic parameter identification, including modeling the system dynamics, joint position/torque data acquisition and filtering, experimental design, dynamic parameters estimation and validation. In this paper, we propose a novel, computationally efficient and intuitive optimality criterion to design the excitation trajectory for the robot to follow. Experiments are carried out for a 6 degree of freedom (DOF) Staubli TX-90 robot. We validate the dynamics parameters using torque prediction accuracy and compare to existing methods. The RMS errors of the prediction were small, and the computation time for the new, optimal objective function is an order of magnitude less than for existing approaches. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b1e67c1768f9ec7a6ab95a9369b92d1",
"text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.",
"title": ""
},
{
"docid": "60999276e84cbd46d778c62439014598",
"text": "Graph comprehension is constrained by the goals of the cognitive system that processes the graph and by the context in which the graph appears. In this paper we report the results of a study using a sentence-graph verification paradigm. We recorded participants’ reaction times to indicate whether the information contained in a simple bar graph matched a written description of the graph. Aside from the consistency of visual and verbal information, we manipulated whether the graph was ascending or descending, the relational term in the verbal description, and the labels of the bars of the graph. Our results showed that the biggest source of variance in people’s reaction times is whether the order in which the referents appear in the graph is the same as the order in which they appear in the sentence. The implications of this finding for contemporary theories of graph comprehension are discussed.",
"title": ""
},
{
"docid": "7ad4c2f0b66a11891bd19d175becf5c2",
"text": "The presence of noise represent a relevant issue in image feature extraction and classification. In deep learning, representation is learned directly from the data and, therefore, the classification model is influenced by the quality of the input. However, the ability of deep convolutional neural networks to deal with images that have a different quality when compare to those used to train the network is still to be fully understood. In this paper, we evaluate the generalization of models learned by different networks using noisy images. Our results show that noise cause the classification problem to become harder. However, when image quality is prone to variations after deployment, it might be advantageous to employ models learned using noisy data.",
"title": ""
},
{
"docid": "672c11254309961fe02bc48827f8949e",
"text": "HIV-1 integration into the host genome favors actively transcribed genes. Prior work indicated that the nuclear periphery provides the architectural basis for integration site selection, with viral capsid-binding host cofactor CPSF6 and viral integrase-binding cofactor LEDGF/p75 contributing to selection of individual sites. Here, by investigating the early phase of infection, we determine that HIV-1 traffics throughout the nucleus for integration. CPSF6-capsid interactions allow the virus to bypass peripheral heterochromatin and penetrate the nuclear structure for integration. Loss of interaction with CPSF6 dramatically alters virus localization toward the nuclear periphery and integration into transcriptionally repressed lamina-associated heterochromatin, while loss of LEDGF/p75 does not significantly affect intranuclear HIV-1 localization. Thus, CPSF6 serves as a master regulator of HIV-1 intranuclear localization by trafficking viral preintegration complexes away from heterochromatin at the periphery toward gene-dense chromosomal regions within the nuclear interior.",
"title": ""
},
{
"docid": "f262c85e241e0c6dd6eb472841284345",
"text": "BACKGROUND\nWe evaluated the feasibility and tolerability of triple- versus double-drug chemotherapy in elderly patients with oesophagogastric cancer.\n\n\nMETHODS\nPatients aged 65 years or older with locally advanced or metastatic oesophagogastric cancer were stratified and randomised to infusional 5-FU, leucovorin and oxaliplatin without (FLO) or with docetaxel 50 mg/m(2) (FLOT) every 2 weeks. The study is registered at ClinicalTrials.gov, identifier NCT00737373.\n\n\nFINDINGS\nOne hundred and forty three (FLO, 71; FLOT, 72) patients with a median age of 70 years were enrolled. The triple combination was associated with more treatment-related National Cancer Institute Common Toxicity Criteria (NCI-CTC) grade 3/4 adverse events (FLOT, 81.9%; FLO, 38.6%; P<.001) and more patients experiencing a ≥10-points deterioration of European Organization for Research and Treatment of Cancer Quality of Life (EORTC QoL) global health status scores (FLOT, 47.5%; FLO 20.5%; p=.011). The triple combination was associated with more alopecia (P<.001), neutropenia (P<.001), leukopenia (P<.001), diarrhoea (P=.006) and nausea (P=.029).). No differences were observed in treatment duration and discontinuation due to toxicity, cumulative doses or toxic deaths between arms. The triple combination improved response rates and progression-free survival in the locally advanced subgroup and in the subgroup of patients aged between 65 and 70 years but not in the metastatic group or in patients aged 70 years and older.\n\n\nINTERPRETATION\nThe triple-drug chemotherapy was feasible in elderly patients with oesophagogastric cancer. However, toxicity was significantly increased and QoL deteriorated in a relevant proportion of patients.\n\n\nFUNDING\nThe study was partially funded by Sanofi-Aventis.",
"title": ""
},
{
"docid": "a90fe1117e587d5b48a056278f48b01d",
"text": "The concept of a medical parallel robot applicable to chest compression in the process of cardiopulmonary resuscitation (CPR) is proposed in this paper. According to the requirement of CPR action, a three-prismatic-universal-universal (3-PUU) translational parallel manipulator (TPM) is designed and developed for such applications, and a detailed analysis has been performed for the 3-PUU TPM involving the issues of kinematics, dynamics, and control. In view of the physical constraints imposed by mechanical joints, both the robot-reachable workspace and the maximum inscribed cylinder-usable workspace are determined. Moreover, the singularity analysis is carried out via the screw theory, and the robot architecture is optimized to obtain a large well-conditioning usable workspace. Based on the principle of virtual work with a simplifying hypothesis adopted, the dynamic model is established, and dynamic control utilizing computed torque method is implemented. At last, the experimental results made for the prototype illustrate the performance of the control algorithm well. This research will lay a good foundation for the development of a medical robot to assist in CPR operation.",
"title": ""
},
{
"docid": "75cb5c4c9c122d6e80419a3ceb99fd67",
"text": "Indonesian clove cigarettes (kreteks), typically have the appearance of a conventional domestic cigarette. The unique aspects of kreteks are that in addition to tobacco they contain dried clove buds (15-40%, by wt.), and are flavored with a proprietary \"sauce\". Whereas the clove buds contribute to generating high levels of eugenol in the smoke, the \"sauce\" may also contribute other potentially harmful constituents in addition to those associated with tobacco use. We measured levels of eugenol, trans-anethole (anethole), and coumarin in smoke from 33 brands of clove-flavored cigarettes (filtered and unfiltered) from five kretek manufacturers. In order to provide information for evaluating the delivery of these compounds under standard smoking conditions, a quantification method was developed for their measurement in mainstream cigarette smoke. The method allowed collection of mainstream cigarette smoke particulate matter on a Cambridge filter pad, extraction with methanol, sampling by automated headspace solid-phase microextraction, and subsequent analysis using gas chromatography/mass spectrometry. The presence of these compounds was confirmed in the smoke of kreteks using mass spectral library matching, high-resolution mass spectrometry (+/-0.0002 amu), and agreement with a relative retention time index, and native standards. We found that when kreteks were smoked according to standardized machine smoke parameters as specified by the International Standards Organization, all 33 clove brands contained levels of eugenol ranging from 2,490 to 37,900 microg/cigarette (microg/cig). Anethole was detected in smoke from 13 brands at levels of 22.8-1,030 microg/cig, and coumarin was detected in 19 brands at levels ranging from 9.2 to 215 microg/cig. These detected levels are significantly higher than the levels found in commercial cigarette brands available in the United States.",
"title": ""
},
{
"docid": "2a8c3676233cf1ae61fe91a7af3873d9",
"text": "Rumination has attracted increasing theoretical and empirical interest in the past 15 years. Previous research has demonstrated significant relationships between rumination, depression, and metacognition. Two studies were conducted to further investigate these relationships and test the fit of a clinical metacognitive model of rumination and depression in samples of both depressed and nondepressed participants. In these studies, we collected cross-sectional data of rumination, depression, and metacognition. The relationships among variables were examined by testing the fit of structural equation models. In the study on depressed participants, a good model fit was obtained consistent with predictions. There were similarities and differences between the depressed and nondepressed samples in terms of relationships among metacognition, rumination, and depression. In each case, theoretically consistent paths between positive metacognitive beliefs, rumination, negative metacognitive beliefs, and depression were evident. The conceptual and clinical implications of these data are discussed.",
"title": ""
},
{
"docid": "77be4363f9080eb8a3b73c9237becca4",
"text": "Aim: The purpose of this paper is to present findings of an integrative literature review related to employees’ motivational practices in organizations. Method: A broad search of computerized databases focusing on articles published in English during 1999– 2010 was completed. Extensive screening sought to determine current literature themes and empirical research evidence completed in employees’ focused specifically on motivation in organization. Results: 40 articles are included in this integrative literature review. The literature focuses on how job characteristics, employee characteristic, management practices and broader environmental factors influence employees’ motivation. Research that links employee’s motivation is both based on qualitative and quantitative studies. Conclusion: This literature reveals widespread support of motivation concepts in organizations. Theoretical and editorial literature confirms motivation concepts are central to employees. Job characteristics, management practices, employee characteristics and broader environmental factors are the key variables influence employees’ motivation in organization.",
"title": ""
},
{
"docid": "89875f4c0d70e655dd1ff9ffef7c04c2",
"text": "Flexible electronics incorporate all the functional attributes of conventional rigid electronics in formats that have been altered to survive mechanical deformations. Understanding the evolution of device performance during bending, stretching, or other mechanical cycling is, therefore, fundamental to research efforts in this area. Here, we review the various classes of flexible electronic devices (including power sources, sensors, circuits and individual components) and describe the basic principles of device mechanics. We then review techniques to characterize the deformation tolerance and durability of these flexible devices, and we catalogue and geometric designs that are intended to optimize electronic systems for maximum flexibility.",
"title": ""
},
{
"docid": "87b7b05c6af2fddb00f7b1d3a60413c1",
"text": "Mobile crowdsensing (MCS) is a human-driven Internet of Things service empowering citizens to observe the phenomena of individual, community, or even societal value by sharing sensor data about their environment while on the move. Typical MCS service implementations utilize cloud-based centralized architectures, which consume a lot of computational resources and generate significant network traffic, both in mobile networks and toward cloud-based MCS services. Mobile edge computing (MEC) is a natural choice to distribute MCS solutions by moving computation to network edge, since an MEC-based architecture enables significant performance improvements due to the partitioning of problem space based on location, where real-time data processing and aggregation is performed close to data sources. This in turn reduces the associated traffic in mobile core and will facilitate MCS deployments of massive scale. This paper proposes an edge computing architecture adequate for massive scale MCS services by placing key MCS features within the reference MEC architecture. In addition to improved performance, the proposed architecture decreases privacy threats and permits citizens to control the flow of contributed sensor data. It is adequate for both data analytics and real-time MCS scenarios, in line with the 5G vision to integrate a huge number of devices and enable innovative applications requiring low network latency. Our analysis of service overhead introduced by distributed architecture and service reconfiguration at network edge performed on real user traces shows that this overhead is controllable and small compared with the aforementioned benefits. When enhanced by interoperability concepts, the proposed architecture creates an environment for the establishment of an MCS marketplace for bartering and trading of both raw sensor data and aggregated/processed information.",
"title": ""
},
{
"docid": "8e16b62676e5ef36324c738ffd5f737d",
"text": "Virtualization technology has shown immense popularity within embedded systems due to its direct relationship with cost reduction, better resource utilization, and higher performance measures. Efficient hypervisors are required to achieve such high performance measures in virtualized environments, while taking into consideration the low memory footprints as well as the stringent timing constraints of embedded systems. Although there are a number of open-source hypervisors available such as Xen, Linux KVM and OKL4 Micro visor, this is the first paper to present the open-source embedded hypervisor Extensible Versatile hyper Visor (Xvisor) and compare it against two of the commonly used hypervisors KVM and Xen in-terms of comparison factors that affect the whole system performance. Experimental results on ARM architecture prove Xvisor's lower CPU overhead, higher memory bandwidth, lower lock synchronization latency and lower virtual timer interrupt overhead and thus overall enhanced virtualized embedded system performance.",
"title": ""
},
{
"docid": "59c2e1dcf41843d859287124cc655b05",
"text": "Atherosclerotic cardiovascular disease (ASCVD) is the most common cause of death in most Western countries. Nutrition factors contribute importantly to this high risk for ASCVD. Favourable alterations in diet can reduce six of the nine major risk factors for ASCVD, i.e. high serum LDL-cholesterol levels, high fasting serum triacylglycerol levels, low HDL-cholesterol levels, hypertension, diabetes and obesity. Wholegrain foods may be one the healthiest choices individuals can make to lower the risk for ASCVD. Epidemiological studies indicate that individuals with higher levels (in the highest quintile) of whole-grain intake have a 29 % lower risk for ASCVD than individuals with lower levels (lowest quintile) of whole-grain intake. It is of interest that neither the highest levels of cereal fibre nor the highest levels of refined cereals provide appreciable protection against ASCVD. Generous intake of whole grains also provides protection from development of diabetes and obesity. Diets rich in wholegrain foods tend to decrease serum LDL-cholesterol and triacylglycerol levels as well as blood pressure while increasing serum HDL-cholesterol levels. Whole-grain intake may also favourably alter antioxidant status, serum homocysteine levels, vascular reactivity and the inflammatory state. Whole-grain components that appear to make major contributions to these protective effects are: dietary fibre; vitamins; minerals; antioxidants; phytosterols; other phytochemicals. Three servings of whole grains daily are recommended to provide these health benefits.",
"title": ""
},
{
"docid": "f10724859d8982be426891e0d5c44629",
"text": "This paper empirically examines how capital affects a bank’s performance (survival and market share) and how this effect varies across banking crises, market crises, and normal times that occurred in the US over the past quarter century. We have two main results. First, capital helps small banks to increase their probability of survival and market share at all times (during banking crises, market crises, and normal times). Second, capital enhances the performance of medium and large banks primarily during banking crises. Additional tests explore channels through which capital generates these effects. Numerous robustness checks and additional tests are performed. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ac8a0b4ad3f2905bc4e37fa4b0fcbe0a",
"text": "In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS’s operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.",
"title": ""
},
{
"docid": "fbc3afe22ed7c2cc6d60be5fcb906b90",
"text": "The thud of a bouncing ball, the onset of speech as lips open — when visual and audio events occur together, it suggests that there might be a common, underlying event that produced both signals. In this paper, we argue that the visual and audio components of a video signal should be modeled jointly using a fused multisensory representation. We propose to learn such a representation in a self-supervised way, by training a neural network to predict whether video frames and audio are temporally aligned. We use this learned representation for three applications: (a) sound source localization, i.e. visualizing the source of sound in a video; (b) audio-visual action recognition; and (c) on/offscreen audio source separation, e.g. removing the off-screen translator’s voice from a foreign official’s speech. Code, models, and video results are available on our webpage: http://andrewowens.com/multisensory.",
"title": ""
},
{
"docid": "97691304930a85066a15086877473857",
"text": "In the context of modern cryptosystems, a common theme is the creation of distributed trust networks. In most of these designs, permanent storage of a contract is required. However, permanent storage can become a major performance and cost bottleneck. As a result, good code compression schemes are a key factor in scaling these contract based cryptosystems. For this project, we formalize and implement a data structure called the Merkelized Abstract Syntax Tree (MAST) to address both data integrity and compression. MASTs can be used to compactly represent contractual programs that will be executed remotely, and by using some of the properties of Merkle trees, they can also be used to verify the integrity of the code being executed. A concept by the same name has been discussed in the Bitcoin community for a while, the terminology originates from the work of Russel O’Connor and Pieter Wuille, however this discussion was limited to private correspondences. We present a formalization of it and provide an implementation.The project idea was developed with Bitcoin applications in mind, and the experiment we set up uses MASTs in a crypto currency network simulator. Using MASTs in the Bitcoin protocol [2] would increase the complexity (length) of contracts permitted on the network, while simultaneously maintaining the security of broadcasted data. Additionally, contracts may contain privileged, secret branches of execution.",
"title": ""
},
{
"docid": "ae43cf8140bbaf7aa8bc04eceb130fda",
"text": "Network virtualization has become increasingly prominent in recent years. It enables the creation of network infrastructures that are specifically tailored to the needs of distinct network applications and supports the instantiation of favorable environments for the development and evaluation of new architectures and protocols. Despite the wide applicability of network virtualization, the shared use of routing devices and communication channels leads to a series of security-related concerns. It is necessary to provide protection to virtual network infrastructures in order to enable their use in real, large scale environments. In this paper, we present an overview of the state of the art concerning virtual network security. We discuss the main challenges related to this kind of environment, some of the major threats, as well as solutions proposed in the literature that aim to deal with different security aspects.",
"title": ""
}
] | scidocsrr |
9024ad2b909493bd511fc45ef0308be2 | An image-warping VR-architecture: design, implementation and applications | [
{
"docid": "8745e21073db143341e376bad1f0afd7",
"text": "The Virtual Reality (VR) user interface style allows natural hand and body motions to manipulate virtual objects in 3D environments using one or more 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, such as scienti c visualization, architectural visualization, and remote manipulation. Currently, the programming e ort required to produce a VR application is too large, and many pitfalls must be avoided in the creation of successful VR programs. In this paper we describe the Decoupled Simulation Model for creating successful VR applications, and a software system that embodies this model. The MR Toolkit simpli es the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. The MR Toolkit encourages programmers to structure their applications to take advantage of the distributed computing capabilities of workstation networks improving the application's performance. In this paper, the motivations and the architecture of the toolkit are outlined, the programmer's view is described, and a simple application is brie y described. CR",
"title": ""
}
] | [
{
"docid": "f8d0929721ba18b2412ca516ac356004",
"text": "Because of the fact that vehicle crash tests are complex and complicated experiments it is advisable to establish their mathematical models. This paper contains an overview of the kinematic and dynamic relationships of a vehicle in a collision. There is also presented basic mathematical model representing a collision together with its analysis. The main part of this paper is devoted to methods of establishing parameters of the vehicle crash model and to real crash data investigation i.e. – creation of a Kelvin model for a real experiment, its analysis and validation. After model’s parameters extraction a quick assessment of an occupant crash severity is done. Key-Words: Modeling, vehicle crash, Kelvin model, data processing.",
"title": ""
},
{
"docid": "6e07a006d4e34f35330c74116762a611",
"text": "Human replicas may elicit unintended cold, eerie feelings in viewers, an effect known as the uncanny valley. Masahiro Mori, who proposed the effect in 1970, attributed it to inconsistencies in the replica's realism with some of its features perceived as human and others as nonhuman. This study aims to determine whether reducing realism consistency in visual features increases the uncanny valley effect. In three rounds of experiments, 548 participants categorized and rated humans, animals, and objects that varied from computer animated to real. Two sets of features were manipulated to reduce realism consistency. (For humans, the sets were eyes-eyelashes-mouth and skin-nose-eyebrows.) Reducing realism consistency caused humans and animals, but not objects, to appear eerier and colder. However, the predictions of a competing theory, proposed by Ernst Jentsch in 1906, were not supported: The most ambiguous representations-those eliciting the greatest category uncertainty-were neither the eeriest nor the coldest.",
"title": ""
},
{
"docid": "a5ac7aa3606ebb683d4d9de5dcd89856",
"text": "Advanced persistent threats (APTs) pose a significant risk to nearly every infrastructure. Due to the sophistication of these attacks, they are able to bypass existing security systems and largely infiltrate the target network. The prevention and detection of APT campaigns is also challenging, because of the fact that the attackers constantly change and evolve their advanced techniques and methods to stay undetected. In this paper we analyze 22 different APT reports and give an overview of the used techniques and methods. The analysis is focused on the three main phases of APT campaigns that allow to identify the relevant characteristics of such attacks. For each phase we describe the most commonly used techniques and methods. Through this analysis we could reveal different relevant characteristics of APT campaigns, for example that the usage of 0-day exploit is not common for APT attacks. Furthermore, the analysis shows that the dumping of credentials is a relevant step in the lateral movement phase for most APT campaigns. Based on the identified characteristics, we also propose concrete prevention and detection approaches that make it possible to identify crucial malicious activities that are performed during APT campaigns.",
"title": ""
},
{
"docid": "27ef8bac566dbba418870036ed555b1a",
"text": "Seemingly unrelated regression (SUR) models are useful in studying the interactions among different variables. In a high dimensional setting or when applied to large panel of time series, these models require a large number of parameters to be estimated and suffer of inferential problems. To avoid overparametrization and overfitting issues, we propose a hierarchical Dirichlet process prior for SUR models, which allows shrinkage of SUR coefficients toward multiple locations and identification of group of coefficients. We propose a two-stage hierarchical prior distribution, where the first stage of the hierarchy consists in a Lasso conditionally independent prior distribution of the NormalGamma family for the SUR coefficients. The second stage is given by a random mixture distribution for the Normal-Gamma hyperparameters, which allows for parameter parsimony through two components: the first one is a random Dirac point-mass distribution, which induces sparsity in the SUR coefficients; the second is a Dirichlet process prior, which allows for clustering of the SUR coefficients. Our sparse SUR model with multiple locations, scales and shapes includes the Vector autoregressive models (VAR) and dynamic panel models as special cases. We consider an international business cycle applications to show the effectiveness of our model and inference approach. Our new multiple shrinkage prior model allows us to better understand shock transmission phenomena, to extract coloured networks and to classify the linkages strenght. The empirical results represent a different point of view on international business cycles providing interesting new findings in the relationship between core and pheriphery countries.",
"title": ""
},
{
"docid": "5d40cae84395cc94d68bd4352383d66b",
"text": "Scalable High Efficiency Video Coding (SHVC) is the extension of the High Efficiency Video Coding (HEVC). This standard is developed to ameliorate the coding efficiency for the spatial and quality scalability. In this paper, we investigate a survey for SHVC extension. We describe also its types and explain the different additional coding tools that further improve the Enhancement Layer (EL) coding efficiency. Furthermore, we assess through experimental results the performance of the SHVC for different coding configurations. The effectiveness of the SHVC was demonstrated, using two layers, by comparing its coding adequacy compared to simulcast configuration and HEVC for enhancement layer using HM16 for several test sequences and coding conditions.",
"title": ""
},
{
"docid": "a5f9b7b7b25ccc397acde105c39c3d9d",
"text": "Processors with multiple cores and complex cache coherence protocols are widely employed to improve the overall performance. It is a major challenge to verify the correctness of a cache coherence protocol since the number of reachable states grows exponentially with the number of cores. In this paper, we propose an efficient test generation technique, which can be used to achieve full state and transition coverage in simulation based verification for a wide variety of cache coherence protocols. Based on effective analysis of the state space structure, our method can generate more efficient test sequences (50% shorter) compared with tests generated by breadth first search. Moreover, our proposed approach can generate tests on-the-fly due to its space efficient design.",
"title": ""
},
{
"docid": "590ad5ce089e824d5e9ec43c54fa3098",
"text": "The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by definingcausal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that arecausally related. Because causal memory isweakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.",
"title": ""
},
{
"docid": "e46943cc1c73a56093d4194330d52d52",
"text": "This paper deals with the compact modeling of an emerging technology: the carbon nanotube field-effect transistor (CNTFET). The paper proposed two design-oriented compact models, the first one for CNTFET with a classical behavior (MOSFET-like CNTFET), and the second one for CNTFET with an ambipolar behavior (Schottky-barrier CNTFET). Both models have been compared with exact numerical simulations and then implemented in VHDL-AMS",
"title": ""
},
{
"docid": "30e15e8a3e6eaf424b2f994d2631ac37",
"text": "This paper presents a volumetric stereo and silhouette fusion algorithm for acquiring high quality models from multiple calibrated photographs. Our method is based on computing and merging depth maps. Different from previous methods of this category, the silhouette information is also applied in our algorithm to recover the shape information on the textureless and occluded areas. The proposed algorithm starts by computing visual hull using a volumetric method in which a novel projection test method is proposed for visual hull octree construction. Then, the depth map of each image is estimated by an expansion-based approach that returns a 3D point cloud with outliers and redundant information. After generating an oriented point cloud from stereo by rejecting outlier, reducing scale, and estimating surface normal for the depth maps, another oriented point cloud from silhouette is added by carving the visual hull octree structure using the point cloud from stereo to restore the textureless and occluded surfaces. Finally, Poisson Surface Reconstruction approach is applied to convert the oriented point cloud both from stereo and silhouette into a complete and accurate triangulated mesh model. The proposed approach has been implemented and the performance of the approach is demonstrated on several real data sets, along with qualitative comparisons with the state-of-the-art image-based modeling techniques according to the Middlebury benchmark.",
"title": ""
},
{
"docid": "1fcd6f0c91522a91fa05b0d969f8eec1",
"text": "Nonnegative matrix factorization (NMF) is a popular method for multivariate analysis of nonnegative data, the goal of which is to decompose a data matrix into a product of two factor matrices with all entries in factor matrices restricted to be nonnegative. NMF was shown to be useful in a task of clustering (especially document clustering), but in some cases NMF produces the results inappropriate to the clustering problems. In this paper, we present an algorithm for orthogonal nonnegative matrix factorization, where an orthogonality constraint is imposed on the nonnegative decomposition of a term-document matrix. The result of orthogonal NMF can be clearly interpreted for the clustering problems, and also the performance of clustering is usually better than that of the NMF. We develop multiplicative updates directly from true gradient on Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Experiments on several different document data sets show our orthogonal NMF algorithms perform better in a task of clustering, compared to the standard NMF and an existing orthogonal NMF.",
"title": ""
},
{
"docid": "e0f0ccb0e1c2f006c5932f6b373fb081",
"text": "This paper proposes a methodology to be used in the segmentation of infrared thermography images for the detection of bearing faults in induction motors. The proposed methodology can be a helpful tool for preventive and predictive maintenance of the induction motor. This methodology is based on manual threshold image processing to obtain a segmentation of an infrared thermal image, which is used for the detection of critical points known as hot spots on the system under test. From these hot spots, the parameters of interest that describe the thermal behavior of the induction motor were obtained. With the segmented image, it is possible to compare and analyze the thermal conditions of the system.",
"title": ""
},
{
"docid": "4f296caa2ee4621a8e0858bfba701a3b",
"text": "This paper considers the problem of assessing visual aesthetic quality with semantic information. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition offers the key to addressing this problem. Based on convolutional neural networks, we propose a general multi-task framework with four different structures. In each structure, aesthetic quality assessment task and semantic recognition task are leveraged, and different features are explored to improve the quality assessment. Moreover, an effective strategy of keeping a balanced effect between the semantic task and aesthetic task is developed to optimize the parameters of our framework. The correlation analysis among the tasks validates the importance of the semantic recognition in aesthetic quality assessment. Extensive experiments verify the effectiveness of the proposed multi-task framework, and further corroborate the",
"title": ""
},
{
"docid": "bf85db5489a61b5fca8d121de198be97",
"text": "In this paper, we propose a novel recursive recurrent neural network (R2NN) to model the end-to-end decoding process for statistical machine translation. R2NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly. Experiments on a Chinese to English translation task show that our proposed R2NN can outperform the stateof-the-art baseline by about 1.5 points in BLEU.",
"title": ""
},
{
"docid": "8af844944f6edee4c271d73a552dc073",
"text": "Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.",
"title": ""
},
{
"docid": "3194a0dd979b668bb25afb10260c30d2",
"text": "An octa-band antenna for 5.7-in mobile phones with the size of 80 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times6$ </tex-math></inline-formula> mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times5.8$ </tex-math></inline-formula> mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and any lumped element is not used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.",
"title": ""
},
{
"docid": "38f6aaf5844ddb6e4ed0665559b7f813",
"text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.",
"title": ""
},
{
"docid": "5dec9852efc32d0a9b93cd173573abf0",
"text": "Magnitudes and timings of kinematic variables have often been used to investigate technique. Where large inter-participant differences exist, as in basketball, analysis of intra-participant variability may provide an alternative indicator of good technique. The aim of the present study was to investigate the joint kinematics and coordination-variability between missed and successful (swishes) free throw attempts. Collegiate level basketball players performed 20 free throws, during which ball release parameters and player kinematics were recorded. For each participant, three misses and three swishes were randomly selected and analysed. Margins of error were calculated based on the optimal-minimum-speed principle. Differences in outcome were distinguished by ball release speeds statistically lower than the optimal speed (misses -0.12 +/- 0.10m s(-1); swishes -0.02 +/- 0.07m s(-1); P < 0.05). No differences in wrist linear velocity were detected, but as the elbow influences the wrist through velocity-dependent-torques, elbow-wrist angle-angle coordination-variability was quantified using vector-coding and found to increase in misses during the last 0.01 s before ball release (P < 0.05). As the margin of error on release parameters is small, the coordination-variability is small, but the increased coordination-variability just before ball release for misses is proposed to arise from players perceiving the technique to be inappropriate and trying to correct the shot. The synergy or coupling relationship between the elbow and wrist angles to generate the appropriate ball speed is proposed as the mechanism determining success of free-throw shots in experienced players.",
"title": ""
},
{
"docid": "dd5c0dc27c0b195b1b8f2c6e6a5cea88",
"text": "The increasing dependence on information networks for business operations has focused managerial attention on managing risks posed by failure of these networks. In this paper, we develop models to assess the risk of failure on the availability of an information network due to attacks that exploit software vulnerabilities. Software vulnerabilities arise from software installed on the nodes of the network. When the same software stack is installed on multiple nodes on the network, software vulnerabilities are shared among them. These shared vulnerabilities can result in correlated failure of multiple nodes resulting in longer repair times and greater loss of availability of the network. Considering positive network effects (e.g., compatibility) alone without taking the risks of correlated failure and the resulting downtime into account would lead to overinvestment in homogeneous software deployment. Exploiting characteristics unique to information networks, we present a queuing model that allows us to quantify downtime loss faced by a rm as a function of (1) investment in security technologies to avert attacks, (2) software diversification to limit the risk of correlated failure under attacks, and (3) investment in IT resources to repair failures due to attacks. The novelty of this method is that we endogenize the failure distribution and the node correlation distribution, and show how the diversification strategy and other security measures/investments may impact these two distributions, which in turn determine the security loss faced by the firm. We analyze and discuss the effectiveness of diversification strategy under different operating conditions and in the presence of changing vulnerabilities. We also take into account the benefits and costs of a diversification strategy. Our analysis provides conditions under which diversification strategy is advantageous.",
"title": ""
},
{
"docid": "af5a2ad28ab61015c0344bf2e29fe6a7",
"text": "Recent years have shown that more than ever governments and intelligence agencies try to control and bypass the cryptographic means used for the protection of data. Backdooring encryption algorithms is considered as the best way to enforce cryptographic control. Until now, only implementation backdoors (at the protocol/implementation/management level) are generally considered. In this paper we propose to address the most critical issue of backdoors: mathematical backdoors or by-design backdoors, which are put directly at the mathematical design of the encryption algorithm. While the algorithm may be totally public, proving that there is a backdoor, identifying it and exploiting it, may be an intractable problem. We intend to explain that it is probably possible to design and put such backdoors. Considering a particular family (among all the possible ones), we present BEA-1, a block cipher algorithm which is similar to the AES and which contains a mathematical backdoor enabling an operational and effective cryptanalysis. The BEA-1 algorithm (80-bit block size, 120-bit key, 11 rounds) is designed to resist to linear and differential cryptanalyses. A challenge will be proposed to the cryptography community soon. Its aim is to assess whether our backdoor is easily detectable and exploitable or not.",
"title": ""
}
] | scidocsrr |
41b4bd5410ae9034056f7a4453a51680 | Amulet: An Energy-Efficient, Multi-Application Wearable Platform | [
{
"docid": "1f95cc7adafe07ad9254359ab405a980",
"text": "Event-driven programming is a popular model for writing programs for tiny embedded systems and sensor network nodes. While event-driven programming can keep the memory overhead down, it enforces a state machine programming style which makes many programs difficult to write, maintain, and debug. We present a novel programming abstraction called protothreads that makes it possible to write event-driven programs in a thread-like style, with a memory overhead of only two bytes per protothread. We show that protothreads significantly reduce the complexity of a number of widely used programs previously written with event-driven state machines. For the examined programs the majority of the state machines could be entirely removed. In the other cases the number of states and transitions was drastically decreased. With protothreads the number of lines of code was reduced by one third. The execution time overhead of protothreads is on the order of a few processor cycles.",
"title": ""
},
{
"docid": "5fd6462e402e3a3ab1e390243d80f737",
"text": "We present TinyOS, a flexible, application-specific operating system for sensor networks. Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation. We present our experiences with TinyOS as a platform for sensor network innovation and applications.",
"title": ""
},
{
"docid": "9bcc81095c32ea39de23217983d33ddc",
"text": "The Internet of Things (IoT) is characterized by heterogeneous devices. They range from very lightweight sensors powered by 8-bit microcontrollers (MCUs) to devices equipped with more powerful, but energy-efficient 32-bit processors. Neither a traditional operating system (OS) currently running on Internet hosts, nor typical OS for sensor networks are capable to fulfill the diverse requirements of such a wide range of devices. To leverage the IoT, redundant development should be avoided and maintenance costs should be reduced. In this paper we revisit the requirements for an OS in the IoT. We introduce RIOT OS, an OS that explicitly considers devices with minimal resources but eases development across a wide range of devices. RIOT OS allows for standard C and C++ programming, provides multi-threading as well as real-time capabilities, and needs only a minimum of 1.5 kB of RAM.",
"title": ""
}
] | [
{
"docid": "7e1e475f5447894a6c246e7d47586c4b",
"text": "Between 1983 and 2003 forty accidental autoerotic deaths (all males, 13-79 years old) have been investigated at the Institute of Legal Medicine in Hamburg. Three cases with a rather unusual scenery are described in detail: (1) a 28-year-old fireworker was found hanging under a bridge in a peculiar bound belt system. The autopsy and the reconstruction revealed signs of asphyxiation, feminine underwear, and several layers of plastic clothing. (2) A 16-year-old pupil dressed with feminine plastic and rubber utensils fixed and strangulated himself with an electric wire. (3) A 28-year-old handicapped man suffered from progressive muscular dystrophy and was nearly unable to move. His bizarre sexual fantasies were exaggerating: he induced a nurse to draw plastic bags over his body, close his mouth with plastic strips, and put him in a rubbish container where he died from suffocation.",
"title": ""
},
{
"docid": "77f3dfeba56c3731fda1870ce48e1aca",
"text": "The organicist view of society is updated by incorporating concepts from cybernetics, evolutionary theory, and complex adaptive systems. Global society can be seen as an autopoietic network of self-producing components, and therefore as a living system or ‘superorganism’. Miller's living systems theory suggests a list of functional components for society's metabolism and nervous system. Powers' perceptual control theory suggests a model for a distributed control system implemented through the market mechanism. An analysis of the evolution of complex, networked systems points to the general trends of increasing efficiency, differentiation and integration. In society these trends are realized as increasing productivity, decreasing friction, increasing division of labor and outsourcing, and increasing cooperativity, transnational mergers and global institutions. This is accompanied by increasing functional autonomy of individuals and organisations and the decline of hierarchies. The increasing complexity of interactions and instability of certain processes caused by reduced friction necessitate a strengthening of society's capacity for information processing and control, i.e. its nervous system. This is realized by the creation of an intelligent global computer network, capable of sensing, interpreting, learning, thinking, deciding and initiating actions: the ‘global brain’. Individuals are being integrated ever more tightly into this collective intelligence. Although this image may raise worries about a totalitarian system that restricts individual initiaSocial Evolution & History / March 2007 58 tive, the superorganism model points in the opposite direction, towards increasing freedom and diversity. The model further suggests some specific futurological predictions for the coming decades, such as the emergence of an automated distribution network, a computer immune system, and a global consensus about values and standards.",
"title": ""
},
{
"docid": "43bab96fad8afab1ea350e327a8f7aec",
"text": "The traditional databases are not capable of handling unstructured data and high volumes of real-time datasets. Diverse datasets are unstructured lead to big data, and it is laborious to store, manage, process, analyze, visualize, and extract the useful insights from these datasets using traditional database approaches. However, many technical aspects exist in refining large heterogeneous datasets in the trend of big data. This paper aims to present a generalized view of complete big data system which includes several stages and key components of each stage in processing the big data. In particular, we compare and contrast various distributed file systems and MapReduce-supported NoSQL databases concerning certain parameters in data management process. Further, we present distinct distributed/cloud-based machine learning (ML) tools that play a key role to design, develop and deploy data models. The paper investigates case studies on distributed ML tools such as Mahout, Spark MLlib, and FlinkML. Further, we classify analytics based on the type of data, domain, and application. We distinguish various visualization tools pertaining three parameters: functionality, analysis capabilities, and supported development environment. Furthermore, we systematically investigate big data tools and technologies (Hadoop 3.0, Spark 2.3) including distributed/cloud-based stream processing tools in a comparative approach. Moreover, we discuss functionalities of several SQL Query tools on Hadoop based on 10 parameters. Finally, We present some critical points relevant to research directions and opportunities according to the current trend of big data. Investigating infrastructure tools for big data with recent developments provides a better understanding that how different tools and technologies apply to solve real-life applications.",
"title": ""
},
{
"docid": "c6aa0e5f93d02fdd07e55dfa62aac6bc",
"text": "While CNNs naturally lend themselves to densely sampled data, and sophisticated implementations are available, they lack the ability to efficiently process sparse data. In this work we introduce a suite of tools that exploit sparsity in both the feature maps and the filter weights, and thereby allow for significantly lower memory footprints and computation times than the conventional dense framework when processing data with a high degree of sparsity. Our scheme provides (i) an efficient GPU implementation of a convolution layer based on direct, sparse convolution; (ii) a filter step within the convolution layer, which we call attention, that prevents fill-in, i.e., the tendency of convolution to rapidly decrease sparsity, and guarantees an upper bound on the computational resources; and (iii) an adaptation of the backpropagation algorithm, which makes it possible to combine our approach with standard learning frameworks, while still exploiting sparsity in the data and the model.",
"title": ""
},
{
"docid": "894e945c9bb27f5464d1b8f119139afc",
"text": "Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p = .0012) for our model C=0.75 (95% CI: 0.70 - 0.79) than the human benchmark of C=0.59 (95% CI: 0.53 - 0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.",
"title": ""
},
{
"docid": "0e6bdfbfb3d47042a3a4f38c0260180c",
"text": "Named Entity Recognition is an important task but is still relatively new for Vietnamese. It is partly due to the lack of a large annotated corpus. In this paper, we present a systematic approach in building a named entity annotated corpus while at the same time building rules to recognize Vietnamese named entities. The resulting open source system achieves an F-measure of 83%, which is better compared to existing Vietnamese NER systems. © 2010 Springer-Verlag Berlin Heidelberg. Index",
"title": ""
},
{
"docid": "c898f6186ff15dff41dcb7b3376b975d",
"text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.",
"title": ""
},
{
"docid": "caae1bbaf151f876f102a1e3e6bd5266",
"text": "It is well-known that information and communication technologies enable many tasks in the context of precision agriculture. In fact, more and more farmers and food and agriculture companies are using precision agriculture-based systems to enhance not only their products themselves, but also their means of production. Consequently, problems arising from large amounts of data management and processing are arising. It would be very useful to have an infrastructure that allows information and agricultural tasks to be efficiently shared and handled. The cloud computing paradigm offers a solution. In this study, a cloud-based software architecture is proposed with the aim of enabling a complete crop management system to be deployed and validated. Such architecture includes modules developed by using Google App Engine, which allows the information to be easily retrieved and processed and agricultural tasks to be properly defined and planned. Additionally, Google’s Datastore (which ensures a high scalability degree), hosts both information that describes such agricultural tasks and agronomic data. The architecture has been validated in a system that comprises a wireless sensor network with fixed nodes and a mobile node on an unmanned aerial vehicle (UAV), deployed in an agricultural farm in the Region of Murcia (Spain). Such a network allows soil water and plant status to be monitored. The UAV (capable of executing missions defined by an administrator) is useful for acquiring visual information in an autonomous manner (under operator supervision, if needed). The system performance has been analysed and results that demonstrate the benefits of using the proposed architecture are detailed.",
"title": ""
},
{
"docid": "414f3647551a4cadeb05143d30230dec",
"text": "Future cellular networks are faced with the challenge of coping with significant traffic growth without increasing operating costs. Network virtualization and Software Defined Networking (SDN) are emerging solutions for fine-grained control and management of networks. In this article, we present a new dynamic tunnel switching technique for SDN-based cellular core networks. The technique introduces a virtualized Evolved Packet Core (EPC) gateway with the capability to select and dynamically switch the user plane processing element for each user. Dynamic GPRS Tunneling Protocol (GTP) termination enables switching the mobility anchor of an active session between a cloud environment, where general purpose hardware is in use, and a fast path implemented with dedicated hardware. We describe a prototype implementation of the technique based on an OpenStack cloud, an OpenFlow controller with GTP tunnel switching, and a dedicated fast path element.",
"title": ""
},
{
"docid": "cec9f586803ffc8dc5868f6950967a1f",
"text": "This report aims to summarize the field of technological forecasting (TF), its techniques and applications by considering the following questions: • What are the purposes of TF? • Which techniques are used for TF? • What are the strengths and weaknesses of these techniques / how do we evaluate their quality? • Do we need different TF techniques for different purposes/technologies? We also present a brief analysis of how TF is used in practice. We analyze how corporate decisions, such as investing millions of dollars to a new technology like solar energy, are being made and we explore if funding allocation decisions are based on “objective, repeatable, and quantifiable” decision parameters. Throughout the analysis, we compare the bibliometric and semantic-enabled approach of the MIT/MIST Collaborative research project “Technological Forecasting using Data Mining and Semantics” (TFDMS) with the existing studies / practices of TF and where TFDMS fits in and how it will contribute to the general TF field.",
"title": ""
},
{
"docid": "033d7d924481a9429c03bb4bcc7b12fc",
"text": "BACKGROUND\nThis study investigates the variations of Heart Rate Variability (HRV) due to a real-life stressor and proposes a classifier based on nonlinear features of HRV for automatic stress detection.\n\n\nMETHODS\n42 students volunteered to participate to the study about HRV and stress. For each student, two recordings were performed: one during an on-going university examination, assumed as a real-life stressor, and one after holidays. Nonlinear analysis of HRV was performed by using Poincaré Plot, Approximate Entropy, Correlation dimension, Detrended Fluctuation Analysis, Recurrence Plot. For statistical comparison, we adopted the Wilcoxon Signed Rank test and for development of a classifier we adopted the Linear Discriminant Analysis (LDA).\n\n\nRESULTS\nAlmost all HRV features measuring heart rate complexity were significantly decreased in the stress session. LDA generated a simple classifier based on the two Poincaré Plot parameters and Approximate Entropy, which enables stress detection with a total classification accuracy, a sensitivity and a specificity rate of 90%, 86%, and 95% respectively.\n\n\nCONCLUSIONS\nThe results of the current study suggest that nonlinear HRV analysis using short term ECG recording could be effective in automatically detecting real-life stress condition, such as a university examination.",
"title": ""
},
{
"docid": "e1a1faf5d2121a3d5cd993d0f9c257a5",
"text": "This paper is the product of an area-exam study. It intends to explain the concept of ontology in the context of knowledge engineering research, which is a sub-area of artiicial intelligence research. It introduces the state of the art on methodologies and tools for building ontologies. It also tries to point out some possible future directions for ontology research.",
"title": ""
},
{
"docid": "ec97d6daf87e79dfc059a022d38e4ff2",
"text": "There are numerous passive contrast sensing autofocus algorithms that are well documented in literature, but some aspects of their comparative performance have not been widely researched. This study explores the relative merits of a set of autofocus algorithms via examining them against a variety of scene conditions. We create a statistics engine that considers a scene taken through a range of focal values and then computes the best focal position using each autofocus algorithm. The process is repeated across a survey of test scenes containing different representative conditions. The results are assessed against focal positions which are determined by manually focusing the scenes. Through examining these results, we then derive conclusions about the relative merits of each autofocus algorithm with respect to the criteria accuracy and unimodality. Our study concludes that the basic 2D spatial gradient measurement approaches yield the best autofocus results in terms of accuracy and unimodality.",
"title": ""
},
{
"docid": "c63ce594f3e940783ae24494a6cb1aa9",
"text": "In this paper, a new deep reinforcement learning based augmented general sequence tagging system is proposed. The new system contains two parts: a deep neural network (DNN) based sequence tagging model and a deep reinforcement learning (DRL) based augmented tagger. The augmented tagger helps improve system performance by modeling the data with minority tags. The new system is evaluated on SLU and NLU sequence tagging tasks using ATIS and CoNLL2003 benchmark datasets, to demonstrate the new system’s outstanding performance on general tagging tasks. Evaluated by F1 scores, it shows that the new system outperforms the current state-of-the-art model on ATIS dataset by 1.9 % and that on CoNLL-2003 dataset by 1.4 %.",
"title": ""
},
{
"docid": "64c06bffe4aeff54fbae9d87370e552c",
"text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.",
"title": ""
},
{
"docid": "5846c9761ec90040feaf71656401d6dd",
"text": "Internet of Things (IoT) is an emergent technology that provides a promising opportunity to improve industrial systems by the smartly use of physical objects, systems, platforms and applications that contain embedded technology to communicate and share intelligence with each other. In recent years, a great range of industrial IoT applications have been developed and deployed. Among these applications, the Water and Oil & Gas Distribution System is tremendously important considering the huge amount of fluid loss caused by leakages and other possible hydraulic failures. Accordingly, to design an accurate Fluid Distribution Monitoring System (FDMS) represents a critical task that imposes a serious study and an adequate planning. This paper reviews the current state-of-the-art of IoT, major IoT applications in industries and focus more on the Industrial IoT FDMS (IIoT FDMS).",
"title": ""
},
{
"docid": "5edc36b296a14950b366e0b3c4ba570c",
"text": "e ecient management of data is an important prerequisite for realising the potential of the Internet of ings (IoT). Two issues given the large volume of structured time-series IoT data are, addressing the diculties of data integration between heterogeneous ings and improving ingestion and query performance across databases on both resource-constrained ings and in the cloud. In this paper, we examine the structure of public IoT data and discover that the majority exhibit unique at, wide and numerical characteristics with a mix of evenly and unevenly-spaced time-series. We investigate the advances in time-series databases for telemetry data and combine these ndings with microbenchmarks to determine the best compression techniques and storage data structures to inform the design of a novel solution optimised for IoT data. A query translation method with low overhead even on resource-constrained ings allows us to utilise rich data models like the Resource Description Framework (RDF) for interoperability and data integration on top of the optimised storage. Our solution, TritanDB, shows an order of magnitude performance improvement across both ings and cloud hardware on many state-of-the-art databases within IoT scenarios. Finally, we describe how TritanDB supports various analyses of IoT time-series data like forecasting.",
"title": ""
},
{
"docid": "1d3318884ffe201e50312b68bf51956a",
"text": "This paper explores alternate algorithms, reward functions and feature sets for performing multi-document summarization using reinforcement learning with a high focus on reproducibility. We show that ROUGE results can be improved using a unigram and bigram similarity metric when training a learner to select sentences for summarization. Learners are trained to summarize document clusters based on various algorithms and reward functions and then evaluated using ROUGE. Our experiments show a statistically significant improvement of 1.33%, 1.58%, and 2.25% for ROUGE-1, ROUGE-2 and ROUGEL scores, respectively, when compared with the performance of the state of the art in automatic summarization with reinforcement learning on the DUC2004 dataset. Furthermore query focused extensions of our approach show an improvement of 1.37% and 2.31% for ROUGE-2 and ROUGE-SU4 respectively over query focused extensions of the state of the art with reinforcement learning on the DUC2006 dataset.",
"title": ""
},
{
"docid": "bc42c1e0bc130ea41af09db0d3ec0c8d",
"text": "In Western societies, the population grows old, and we must think about solutions to help them to stay at home in a secure environment. By providing a specific analysis of people behavior, computer vision offers a good solution for healthcare systems, and particularly for fall detection. This demo will show the results of a new method to detect falls using a monocular camera. The main characteristic of this method is the use of head 3D trajectories for fall detection.",
"title": ""
},
{
"docid": "ec37e61fcac2639fa6e605b362f2a08d",
"text": "Keyphrases that efficiently summarize a document’s content are used in various document processing and retrieval tasks. Current state-of-the-art techniques for keyphrase extraction operate at a phrase-level and involve scoring candidate phrases based on features of their component words. In this paper, we learn keyphrase taggers for research papers using token-based features incorporating linguistic, surfaceform, and document-structure information through sequence labeling. We experimentally illustrate that using withindocument features alone, our tagger trained with Conditional Random Fields performs on-par with existing state-of-the-art systems that rely on information from Wikipedia and citation networks. In addition, we are also able to harness recent work on feature labeling to seamlessly incorporate expert knowledge and predictions from existing systems to enhance the extraction performance further. We highlight the modeling advantages of our keyphrase taggers and show significant performance improvements on two recently-compiled datasets of keyphrases from Computer Science research papers.",
"title": ""
}
] | scidocsrr |
33115c38cc10bfa1cf19b6d28490f4bb | Word Sense Induction by Community Detection | [
{
"docid": "b9cf32ef9364f55c5f59b4c6a9626656",
"text": "Graph-based methods have gained attention in many areas of Natural Language Processing (NLP) including Word Sense Disambiguation (WSD), text summarization, keyword extraction and others. Most of the work in these areas formulate their problem in a graph-based setting and apply unsupervised graph clustering to obtain a set of clusters. Recent studies suggest that graphs often exhibit a hierarchical structure that goes beyond simple flat clustering. This paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word. The inferred hierarchical structures are applied to the problem of word sense disambiguation, where we show that our method performs significantly better than traditional graph-based methods and agglomerative clustering yielding improvements over state-of-the-art WSD systems based on sense induction.",
"title": ""
},
{
"docid": "f3b176b37ccdd616eace518f9cf3af63",
"text": "Word Sense Induction (WSI) is the task of identifying the different senses (uses) of a target word in a given text. Traditional graph-based approaches create and then cluster a graph, in which each vertex corresponds to a word that co-occurs with the target word, and edges between vertices are weighted based on the co-occurrence frequency of their associated words. In contrast, in our approach each vertex corresponds to a collocation that co-occurs with the target word, and edges between vertices are weighted based on the co-occurrence frequency of their associated collocations. A smoothing technique is applied to identify more edges between vertices and the resulting graph is then clustered. Our evaluation under the framework of SemEval-2007 WSI task shows the following: (a) our approach produces less sense-conflating clusters than those produced by traditional graph-based approaches, (b) our approach outperforms the existing state-of-the-art results.",
"title": ""
}
] | [
{
"docid": "41e188c681516862a69fe8e90c58a618",
"text": "This paper explores the use of Information-Centric Networking (ICN) to support management operations in IoT deployments, presenting the design of a flexible architecture that allows the appropriate operation of IoT devices within a delimited ICN network domain. Our architecture has been designed with special consideration to naming, interoperation, security and energy-efficiency requirements. We theoretically assess the communication overhead introduced by the security procedures of our solution, both at IoT devices and clients. Additionally, we show the potential of our architecture to accommodate enhanced management applications, focusing on a specific use case, i.e. an information freshness service level agreement application. Finally, we present a proof-of-concept implementation of our architecture over an Arduino board, and we use it to carry out a set of experiments that validate the feasibility of our solution. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3120b862a5957b0deeec5345376b74d0",
"text": "This paper deals with automatic cartoon colorization. This is a hard issue, since it is an ill-posed problem that usually requires user intervention to achieve high quality. Motivated by the recent successes in natural image colorization based on deep learning techniques, we investigate the colorization problem at the cartoon domain using Convolutional Neural Network. To our best knowledge, no existing papers or research studies address this problem using deep learning techniques. Here we investigate a deep Convolutional Neural Network based automatic color filling method for cartoons.",
"title": ""
},
{
"docid": "1d162153d7bbaf63991f79aa92eeae6e",
"text": "We describe a contextual parser for the Robot Commands Treebank, a new crowdsourced resource. In contrast to previous semantic parsers that select the most-probable parse, we consider the different problem of parsing using additional situational context to disambiguate between different readings of a sentence. We show that multiple semantic analyses can be searched using dynamic programming via interaction with a spatial planner, to guide the parsing process. We are able to parse sentences in near linear-time by ruling out analyses early on that are incompatible with spatial context. We report a 34% upper bound on accuracy, as our planner correctly processes spatial context for 3,394 out of 10,000 sentences. However, our parser achieves a 96.53% exactmatch score for parsing within the subset of sentences recognized by the planner, compared to 82.14% for a non-contextual parser.",
"title": ""
},
{
"docid": "d4a893a151ce4a3dee0e5fde0ba11b7b",
"text": "Software-Defined Radio (SDR) technology has already cleared up passive radar applications. Nevertheless, until now, no work has pointed how this flexible radio could fully and directly exploit pulsed radar signals. This paper aims at introducing this field of study presenting not only an SDR-based radar-detector but also how it could be conceived on a low power consumption device as a tablet, which would make convenient a passive network to identify and localize aircraft as a redundancy to the conventional air traffic control in adverse situations. After a brief approach of the main features of the equipment, as well as of the developed processing script, indoor experiments took place. Their results demonstrate that the processing of pulsed radar signal allows emitters to be identified when a local database is confronted. All this commitment has contributed to a greater proposal of an Electronic Intelligence (ELINT) or Electronic Support Measures (ESM) system embedded on a tablet, presenting characteristics of portability and furtiveness. This study is suggested for the areas of Software-Defined Radio, Electronic Warfare, Electromagnetic Devices and Radar Signal Processing.",
"title": ""
},
{
"docid": "9245a5a3daad7fbce9416b1dedb9e9ab",
"text": "BACKGROUND\nDespite the growing epidemic of heart failure with preserved ejection fraction (HFpEF), no valid measure of patients' health status (symptoms, function, and quality of life) exists. We evaluated the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated measure of HF with reduced EF, in patients with HFpEF.\n\n\nMETHODS AND RESULTS\nUsing a prospective HF registry, we dichotomized patients into HF with reduced EF (EF≤ 40) and HFpEF (EF≥50). The associations between New York Heart Association class, a commonly used criterion standard, and KCCQ Overall Summary and Total Symptom domains were evaluated using Spearman correlations and 2-way ANOVA with differences between patients with HF with reduced EF and HFpEF tested with interaction terms. Predictive validity of the KCCQ Overall Summary scores was assessed with Kaplan-Meier curves for death and all-cause hospitalization. Covariate adjustment was made using Cox proportional hazards models. Internal reliability was assessed with Cronbach's α. Among 849 patients, 200 (24%) had HFpEF. KCCQ summary scores were strongly associated with New York Heart Association class in both patients with HFpEF (r=-0.62; P<0.001) and HF with reduced EF (r=-0.55; P=0.27 for interaction). One-year event-free rates by KCCQ category among patients with HFpEF were 0 to 25=13.8%, 26 to 50=59.1%, 51 to 75=73.8%, and 76 to 100=77.8% (log rank P<0.001), with no significant interaction by EF (P=0.37). The KCCQ domains demonstrated high internal consistency among patients with HFpEF (Cronbach's α=0.96 for overall summary and ≥0.69 in all subdomains).\n\n\nCONCLUSIONS\nAmong patients with HFpEF, the KCCQ seems to be a valid and reliable measure of health status and offers excellent prognostic ability. Future studies should extend and replicate our findings, including the establishment of its responsiveness to clinical change.",
"title": ""
},
{
"docid": "2603c07864b92c6723b40c83d3c216b9",
"text": "Background: A study was undertaken to record exacerbations and health resource use in patients with COPD during 6 months of treatment with tiotropium, salmeterol, or matching placebos. Methods: Patients with COPD were enrolled in two 6-month randomised, placebo controlled, double blind, double dummy studies of tiotropium 18 μg once daily via HandiHaler or salmeterol 50 μg twice daily via a metered dose inhaler. The two trials were combined for analysis of heath outcomes consisting of exacerbations, health resource use, dyspnoea (assessed by the transitional dyspnoea index, TDI), health related quality of life (assessed by St George’s Respiratory Questionnaire, SGRQ), and spirometry. Results: 1207 patients participated in the study (tiotropium 402, salmeterol 405, placebo 400). Compared with placebo, tiotropium but not salmeterol was associated with a significant delay in the time to onset of the first exacerbation. Fewer COPD exacerbations/patient year occurred in the tiotropium group (1.07) than in the placebo group (1.49, p<0.05); the salmeterol group (1.23 events/year) did not differ from placebo. The tiotropium group had 0.10 hospital admissions per patient year for COPD exacerbations compared with 0.17 for salmeterol and 0.15 for placebo (not statistically different). For all causes (respiratory and non-respiratory) tiotropium, but not salmeterol, was associated with fewer hospital admissions while both groups had fewer days in hospital than the placebo group. The number of days during which patients were unable to perform their usual daily activities was lowest in the tiotropium group (tiotropium 8.3 (0.8), salmeterol 11.1 (0.8), placebo 10.9 (0.8), p<0.05). SGRQ total score improved by 4.2 (0.7), 2.8 (0.7) and 1.5 (0.7) units during the 6 month trial for the tiotropium, salmeterol and placebo groups, respectively (p<0.01 tiotropium v placebo). Compared with placebo, TDI focal score improved in both the tiotropium group (1.1 (0.3) units, p<0.001) and the salmeterol group (0.7 (0.3) units, p<0.05). Evaluation of morning pre-dose FEV1, peak FEV1 and mean FEV1 (0–3 hours) showed that tiotropium was superior to salmeterol while both active drugs were more effective than placebo. Conclusions: Exacerbations of COPD and health resource usage were positively affected by daily treatment with tiotropium. With the exception of the number of hospital days associated with all causes, salmeterol twice daily resulted in no significant changes compared with placebo. Tiotropium also improved health related quality of life, dyspnoea, and lung function in patients with COPD.",
"title": ""
},
{
"docid": "4a9da1575b954990f98e6807deae469e",
"text": "Recently, there has been considerable debate concerning key sizes for publ i c key based cry p t o graphic methods. Included in the debate have been considerations about equivalent key sizes for diffe rent methods and considerations about the minimum re q u i red key size for diffe rent methods. In this paper we propose a method of a n a lyzing key sizes based upon the value of the data being protected and the cost of b reaking ke y s . I . I n t ro d u c t i o n A . W H Y I S K E Y S I Z E I M P O R T A N T ? In order to keep transactions based upon public key cryptography secure, one must ensure that the underlying keys are sufficiently large as to render the best possible attack infeasible. However, this really just begs the question as one is now left with the task of defining ‘infeasible’. Does this mean infeasible given access to (say) most of the Internet to do the computations? Does it mean infeasible to a large adversary with a large (but unspecified) budget to buy the hardware for an attack? Does it mean infeasible with what hardware might be obtained in practice by utilizing the Internet? Is it reasonable to assume that if utilizing the entire Internet in a key breaking effort makes a key vulnerable that such an attack might actually be conducted? If a public effort involving a substantial fraction of the Internet breaks a single key, does this mean that similar sized keys are unsafe? Does one need to be concerned about such public efforts or does one only need to be concerned about possible private, sur reptitious efforts? After all, if a public attack is known on a particular key, it is easy to change that key. We shall attempt to address these issues within this paper. number 13 Apr i l 2000 B u l l e t i n News and A dv i c e f rom RSA La bo rat o r i e s I . I n t ro d u c t i o n I I . M et ho ds o f At tac k I I I . H i s tor i ca l R es u l t s and t he R S A Ch a l le nge I V. Se cu r i t y E st i m ate s",
"title": ""
},
{
"docid": "956cf3bf67aa60391b7c96162a5013bd",
"text": "Transferring artistic styles onto everyday photographs has become an extremely popular task in both academia and industry. Recently, offline training has replaced online iterative optimization, enabling nearly real-time stylization. When those stylization networks are applied directly to high-resolution images, however, the style of localized regions often appears less similar to the desired artistic style. This is because the transfer process fails to capture small, intricate textures and maintain correct texture scales of the artworks. Here we propose a multimodal convolutional neural network that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales. Compared to state-of-the-art networks, our network can also perform style transfer in nearly real-time by performing much more sophisticated training offline. By properly handling style and texture cues at multiple scales using several modalities, we can transfer not just large-scale, obvious style cues but also subtle, exquisite ones. That is, our scheme can generate results that are visually pleasing and more similar to multiple desired artistic styles with color and texture cues at multiple scales.",
"title": ""
},
{
"docid": "3105a48f0b8e45857e8d48e26b258e04",
"text": "Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.",
"title": ""
},
{
"docid": "b39904ccd087e59794cf2cc02e5d2644",
"text": "In this paper, we propose a novel walking method for torque controlled robots. The method is able to produce a wide range of speeds without requiring off-line optimizations and re-tuning of parameters. We use a quadratic whole-body optimization method running online which generates joint torques, given desired Cartesian accelerations of center of mass and feet. Using a dynamics model of the robot inside this optimizer, we ensure both compliance and tracking, required for fast locomotion. We have designed a foot-step planner that uses a linear inverted pendulum as simplified robot internal model. This planner is formulated as a quadratic convex problem which optimizes future steps of the robot. Fast libraries help us performing these calculations online. With very few parameters to tune and no perception, our method shows notable robustness against strong external pushes, relatively large terrain variations, internal noises, model errors and also delayed communication.",
"title": ""
},
{
"docid": "d3b0a831715bd2f2de9d94811bdd47e7",
"text": "Aspect Term Extraction (ATE) identifies opinionated aspect terms in texts and is one of the tasks in the SemEval Aspect Based Sentiment Analysis (ABSA) contest. The small amount of available datasets for supervised ATE and the costly human annotation for aspect term labelling give rise to the need for unsupervised ATE. In this paper, we introduce an architecture that achieves top-ranking performance for supervised ATE. Moreover, it can be used efficiently as feature extractor and classifier for unsupervised ATE. Our second contribution is a method to automatically construct datasets for ATE. We train a classifier on our automatically labelled datasets and evaluate it on the human annotated SemEval ABSA test sets. Compared to a strong rule-based baseline, we obtain a dramatically higher F-score and attain precision values above 80%. Our unsupervised method beats the supervised ABSA baseline from SemEval, while preserving high precision scores.",
"title": ""
},
{
"docid": "eab86ab18bd47e883b184dcd85f366cd",
"text": "We study corporate bond default rates using an extensive new data set spanning the 1866–2008 period. We find that the corporate bond market has repeatedly suffered clustered default events much worse than those experienced during the Great Depression. For example, during the railroad crisis of 1873–1875, total defaults amounted to 36% of the par value of the entire corporate bond market. Using a regime-switching model, we examine the extent to which default rates can be forecast by financial and macroeconomic variables. We find that stock returns, stock return volatility, and changes in GDP are strong predictors of default rates. Surprisingly, however, credit spreads are not. Over the long term, credit spreads are roughly twice as large as default losses, resulting in an average credit risk premium of about 80 basis points. We also find that credit spreads do not adjust in response to realized default rates. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "897962874a43ee19e3f50f431d4c449e",
"text": "According to Dennett, the same system may be described using a ‘physical’ (mechanical) explanatory stance, or using an ‘intentional’ (beliefand goalbased) explanatory stance. Humans tend to find the physical stance more helpful for certain systems, such as planets orbiting a star, and the intentional stance for others, such as living animals. We define a formal counterpart of physical and intentional stances within computational theory: a description of a system as either a device, or an agent, with the key difference being that ‘devices’ are directly described in terms of an input-output mapping, while ‘agents’ are described in terms of the function they optimise. Bayes’ rule can then be applied to calculate the subjective probability of a system being a device or an agent, based only on its behaviour. We illustrate this using the trajectories of an object in a toy grid-world domain.",
"title": ""
},
{
"docid": "a1d9c897f926fa4cc45ebc6209deb6bc",
"text": "This paper addresses the relationship between the ego, id, and internal objects. While ego psychology views the ego as autonomous of the drives, a less well-known alternative position views the ego as constituted by the drives. Based on Freud's ego-instinct account, this position has developed into a school of thought which postulates that the drives act as knowers. Given that there are multiple drives, this position proposes that personality is constituted by multiple knowers. Following on from Freud, the ego is viewed as a composite sub-set of the instinctual drives (ego-drives), whereas those drives cut off from expression form the id. The nature of the \"self\" is developed in terms of identification and the possibility of multiple personalities is also established. This account is then extended to object-relations and the explanatory value of the ego-drive account is discussed in terms of the addressing the nature of ego-structures and the dynamic nature of internal objects. Finally, the impact of psychological conflict and the significance of repression for understanding the nature of splits within the psyche are also discussed.",
"title": ""
},
{
"docid": "ee4288bcddc046ae5e9bcc330264dc4f",
"text": "Emerging recognition of two fundamental errors underpinning past polices for natural resource issues heralds awareness of the need for a worldwide fundamental change in thinking and in practice of environmental management. The first error has been an implicit assumption that ecosystem responses to human use are linear, predictable and controllable. The second has been an assumption that human and natural systems can be treated independently. However, evidence that has been accumulating in diverse regions all over the world suggests that natural and social systems behave in nonlinear ways, exhibit marked thresholds in their dynamics, and that social-ecological systems act as strongly coupled, complex and evolving integrated systems. This article is a summary of a report prepared on behalf of the Environmental Advisory Council to the Swedish Government, as input to the process of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa in 26 August 4 September 2002. We use the concept of resilience--the capacity to buffer change, learn and develop--as a framework for understanding how to sustain and enhance adaptive capacity in a complex world of rapid transformations. Two useful tools for resilience-building in social-ecological systems are structured scenarios and active adaptive management. These tools require and facilitate a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options.",
"title": ""
},
{
"docid": "e1485bddbab0c3fa952d045697ff2112",
"text": "The diversity of an ensemble of classifiers is known to be an important factor in determining its generalization error. We present a new method for generating ensembles, Decorate (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples), that directly constructs diverse hypotheses using additional artificially-constructed training examples. The technique is a simple, general meta-learner that can use any strong learner as a base classifier to build diverse committees. Experimental results using decision-tree induction as a base learner demonstrate that this approach consistently achieves higher predictive accuracy than the base classifier, Bagging and Random Forests. Decorate also obtains higher accuracy than Boosting on small training sets, and achieves comparable performance on larger training sets.",
"title": ""
},
{
"docid": "b84c233a32dfe8fd004ad33a6565df9c",
"text": "Graph databases with a custom non-relational backend promote themselves to outperform relational databases in answering queries on large graphs. Recent empirical studies show that this claim is not always true. However, these studies focus only on pattern matching queries and neglect analytical queries used in practice such as shortest path, diameter, degree centrality or closeness centrality. In addition, there is no distinction between different types of pattern matching queries. In this paper, we introduce a set of analytical and pattern matching queries, and evaluate them in Neo4j and a market-leading commercial relational database system. We show that the relational database system outperforms Neo4j for our analytical queries and that Neo4j is faster for queries that do not filter on specific edge types.",
"title": ""
},
{
"docid": "4bf78f78c76f65bbdc856e1290311cd1",
"text": "The capacity to rectify DNA double-strand breaks (DSBs) is crucial for the survival of all species. DSBs can be repaired either by homologous recombination (HR) or non-homologous end joining (NHEJ). The long-standing notion that bacteria rely solely on HR for DSB repair has been overturned by evidence that mycobacteria and other genera have an NHEJ system that depends on a dedicated DNA ligase, LigD, and the DNA-end-binding protein Ku. Recent studies have illuminated the role of NHEJ in protecting the bacterial chromosome against DSBs and other clastogenic stresses. There is also emerging evidence of functional crosstalk between bacterial NHEJ proteins and components of other DNA-repair pathways. Although still a young field, bacterial NHEJ promises to teach us a great deal about the nexus of DNA repair and bacterial pathogenesis.",
"title": ""
},
{
"docid": "340f64ed182a54ef617d7aa2ffeac138",
"text": "Compared with animals, plants generally possess a high degree of developmental plasticity and display various types of tissue or organ regeneration. This regenerative capacity can be enhanced by exogenously supplied plant hormones in vitro, wherein the balance between auxin and cytokinin determines the developmental fate of regenerating organs. Accumulating evidence suggests that some forms of plant regeneration involve reprogramming of differentiated somatic cells, whereas others are induced through the activation of relatively undifferentiated cells in somatic tissues. We summarize the current understanding of how plants control various types of regeneration and discuss how developmental and environmental constraints influence these regulatory mechanisms.",
"title": ""
},
{
"docid": "3fe2f080342154fca61b3c1bb4ee8aba",
"text": "In this paper, we apply imitation learning to develop drivers for The Open Racing Car Simulator (TORCS). Our approach can be classified as a direct method in that it applies supervised learning to learn car racing behaviors from the data collected from other drivers. In the literature, this approach is known to have led to extremely poor performance with drivers capable of completing only very small parts of a track. In this paper we show that, by using high-level information about the track ahead of the car and by predicting high-level actions, it is possible to develop drivers with performances that in some cases are only 15% lower than the performance of the fastest driver available in TORCS. Our experimental results suggest that our approach can be effective in developing drivers with good performance in non-trivial tracks using a very limited amount of data and computational resources. We analyze the driving behavior of the controllers developed using our approach and identify perceptual aliasing as one of the factors which can limit performance of our approach.",
"title": ""
}
] | scidocsrr |
51baa8f8d538dcfe131ffe1cad8a7cfe | Research on Combining Scrum with CMMI in Small and Medium Organizations | [
{
"docid": "0cf1f63fd39c8c74465fad866958dac6",
"text": "Software development organizations that have been employing capability maturity models, such as SW-CMM or CMMI for improving their processes are now increasingly interested in the possibility of adopting agile development methods. In the context of project management, what can we say about Scrum’s alignment with CMMI? The aim of our paper is to present the mapping between CMMI and the agile method Scrum, showing major gaps between them and identifying how organizations are adopting complementary practices in their projects to make these two approaches more compliant. This is useful for organizations that have a plan-driven process based on the CMMI model and are planning to improve the agility of processes or to help organizations to define a new project management framework based on both CMMI and Scrum practices.",
"title": ""
}
] | [
{
"docid": "76e8496e4ce5ce940673e01ff04f088d",
"text": "A fundamental fact about polynomial interpolation is that k evaluations of a degree-(k-1) polynomial f are sufficient to determine f. This is also necessary in a strong sense: given k-1 evaluations, we learn nothing about the value of f on any k'th point. In this paper, we study a variant of the polynomial interpolation problem. Instead of querying entire evaluations of f (which are elements of a large field F), we are allowed to query partial evaluations; that is, each evaluation delivers a few elements from a small subfield of F, rather than a single element from F. We show that in this model, one can do significantly better than in the traditional setting, in terms of the amount of information required to determine the missing evaluation. More precisely, we show that only O(k) bits are necessary to recover a missing evaluation. In contrast, the traditional method of looking at k evaluations requires Omega(k log(k)) bits. We also show that our result is optimal for linear methods, even up to the leading constants. Our motivation comes from the use of Reed-Solomon (RS) codes for distributed storage systems, in particular for the exact repair problem. The traditional use of RS codes in this setting is analogous to the traditional interpolation problem. Each node in a system stores an evaluation of f, and if one node fails we can recover it by reading k other nodes. However, each node is free to send less information, leading to the modified problem above. The quickly-developing field of regenerating codes has yielded several codes which take advantage of this freedom. However, these codes are not RS codes, and RS codes are still often used in practice; in 2011, Dimakis et al. asked how well RS codes could perform in this setting. Our results imply that RS codes can also take advantage of this freedom to download partial symbols. In some parameter regimes---those with small levels of sub-packetization---our scheme for RS codes outperforms all known regenerating codes. Even with a high degree of sub-packetization, our methods give non-trivial schemes, and we give an improved repair scheme for a specific (14,10)-RS code used in the Facebook Hadoop Analytics cluster.",
"title": ""
},
{
"docid": "f0c415dfb22032064e8cdb0ec76403b7",
"text": "In this paper, an impedance control scheme for aerial robotic manipulators is proposed, with the aim of reducing the end-effector interaction forces with the environment. The proposed control has a multi-level architecture, in detail the outer loop is composed by a trajectory generator and an impedance filter that modifies the trajectory to achieve a complaint behaviour in the end-effector space; a middle loop is used to generate the joint space variables through an inverse kinematic algorithm; finally the inner loop is aimed at ensuring the motion tracking. The proposed control architecture has been experimentally tested.",
"title": ""
},
{
"docid": "cc70efd881626a16ab23b9305e67adce",
"text": "Many different sciences have developed many different tests to describe and characterise spatial point data. For example, all the trees in a given area may be mapped such that their x, y co-ordinates and other variables, or ‘marks’, (e.g. species, size) might be recorded. Statistical techniques can be used to explore interactions between events at different length scales and interactions between different types of events in the same area. SpPack is a menu-driven add-in for Excel written in Visual Basic for Applications (VBA) that provides a range of statistical analyses for spatial point data. These include simple nearest-neighbour-derived tests and more sophisticated second-order statistics such as Ripley’s K-function and the neighbourhood density function (NDF). Some simple grid or quadrat-based statistics are also calculated. The application of the SpPack add-in is demonstrated for artificially generated event sets with known properties and for a multi-type ecological event set. 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "72196b0a2eed5e9747d90593cdd0684d",
"text": "Advanced silicon (Si) node technology development is moving to 10/7nm technology and pursuing die size reduction, efficiency enhancement and lower power consumption for mobile applications in the semiconductor industry. The flip chip chip scale package (fcCSP) has been viewed as an attractive solution to achieve the miniaturization of die size, finer bump pitch, finer line width and spacing (LW/LS) substrate requirements, and is widely adopted in mobile devices to satisfy the increasing demands of higher performance, higher bandwidth, and lower power consumption as well as multiple functions. The utilization of mass reflow (MR) chip attach process in a fcCSP with copper (Cu) pillar bumps, embedded trace substrate (ETS) technology and molded underfill (MUF) is usually viewed as the cost-efficient solution. However, when finer bump pitch and LW/LS with an escaped trace are designed in flip chip MR process, a higher risk of a bump to trace short can occur. In order to reduce the risk of bump to trace short as well as extremely low-k (ELK) damage in a fcCSP with advanced Si node, the thermo-compression bonding (TCB) and TCB with non-conductive paste (TCNCP) have been adopted, although both methodologies will cause a higher assembly cost due to the lower units per hour (UPH) assembly process. For the purpose of delivering a cost-effective chip attach process as compared to TCB/TCNCP methodologies as well as reducing the risk of bump to trace as compared to the MR process, laser assisted bonding (LAB) chip attach methodology was studied in a 15x15mm fcCSP with 10nm backend process daisy-chain die for this paper. Using LAB chip attach technology can increase the UPH by more than 2-times over TCB and increase the UPH 5-times compared to TCNCP. To realize the ELK performance of a 10nm fcCSP with fine bump pitch of $60 \\mu \\mathrm{m}$ and $90 \\mu \\mathrm{m}$ as well as 2-layer ETS with two escaped traces design, the quick temperature cycling (QTC) test was performed after the LAB chip attach process. The comparison of polyimide (PI) layer Cu pillar bumps to non-PI Cu pillar bumps (without a PI layer) will be discussed to estimate the 10nm ELK performance. The evaluated result shows that the utilization of LAB can not only achieve a bump pitch reduction with a finer LW/LS substrate with escaped traces in the design, but it also validates ELK performance and Si node reduction. Therefore, the illustrated LAB chip attach processes examined here can guarantee the assembly yield with less ELK damage risk in a 10nm fcCSP with finer bump pitch and substrate finer LW/LS design in the future.",
"title": ""
},
{
"docid": "154528ab93e89abe965b6abd93af6a13",
"text": "We investigate the geometry of that function in the plane or 3-space, which associates to each point the square of the shortest distance to a given curve or surface. Particular emphasis is put on second order Taylor approximants and other local quadratic approximants. Their key role in a variety of geometric optimization algorithms is illustrated at hand of registration in Computer Vision and surface approximation.",
"title": ""
},
{
"docid": "e2ea233e4baaf3c76337c779060531cf",
"text": "OBJECTIVES\nAnticoagulant and antiplatelet medications are known to increase the risk and severity of traumatic intracranial hemorrhage (tICH), even with minor head trauma. Most studies on bleeding propensity with head trauma are retrospective, are based on trauma registries, or include heterogeneous mechanisms of injury. The goal of this study was to determine the rate of tICH from only a common low-acuity mechanism of injury, that of a ground-level fall, in patients taking one or more of the following antiplatelet or anticoagulant medications: aspirin, warfarin, prasugrel, ticagrelor, dabigatran, rivaroxaban, apixaban, or enoxaparin.\n\n\nMETHODS\nThis was a prospective cohort study conducted at a Level I tertiary care trauma center of consecutive patients meeting the inclusion criteria of a ground-level fall with head trauma as affirmed by the treating clinician, a computed tomography (CT) head obtained, and taking and one of the above antiplatelet or anticoagulants. Patients were identified prospectively through electronic screening with confirmatory chart review. Emergency department charts were abstracted without subsequent knowledge of the hospital course. Patients transferred with a known abnormal CT head were excluded. Primary outcome was rate of tICH on initial CT head. Rates with 95% confidence intervals (CIs) were compared.\n\n\nRESULTS\nOver 30 months, we enrolled 939 subjects. The mean ± SD age was 78.3 ± 11.9 years and 44.6% were male. There were a total of 33 patients with tICH (3.5%, 95% CI = 2.5%-4.9%). Antiplatelets had a rate of tICH of 4.3% (95% CI = 3.0%-6.2%) compared to anticoagulants with a rate of 1.7% (95% CI = 0.4%-4.5%). Aspirin without other agents had an tICH rate of 4.6% (95% CI = 3.2%-6.6%); of these, 81.5% were taking low-dose 81 mg aspirin. Two patients received a craniotomy (one taking aspirin, one taking warfarin). There were four deaths (three taking aspirin, one taking warfarin). Most (72.7%) subjects with tICH were discharged home or to a rehabilitation facility. There were no tICH in 31 subjects taking a direct oral anticoagulant. CIs were overlapping for the groups.\n\n\nCONCLUSION\nThere is a low incidence of clinically significant tICH with a ground-level fall in head trauma in patients taking an anticoagulant or antiplatelet medication. There was no statistical difference in rate of tICH between antiplatelet and anticoagulants, which is unanticipated and counterintuitive as most literature and teaching suggests a higher rate with anticoagulants. A larger data set is needed to determine if small differences between the groups exist.",
"title": ""
},
{
"docid": "aa4887f5671a23580a5c48b8f0508f74",
"text": "Thrombocytopenia–absent radius syndrome is a rare autosomal recessive disorder characterized by megakaryocytic thrombocytopenia and longitudinal limb deficiencies mostly affecting the radial ray. Most patients are compound heterozygotes for a 200 kb interstitial microdeletion in 1q21.1 and a hypomorphic allele in RBM8A, mapping in the deleted segment. At the moment, the complete molecular characterization of thrombocytopenia–absent radius syndrome is limited to a handful of patients mostly ascertained in the pediatric age We report on a fetus with bilateral upper limb deficiency found at standard prenatal ultrasound examination. The fetus had bilateral radial agenesis and humeral hypo/aplasia with intact thumbs, micrognathia and urinary anomalies, indicating thrombocytopenia–absent radius syndrome. Molecular studies demonstrated compound heterozygosity for the 1q21.1 microdeletion and the RBM8A rs139428292 variant at the hemizygous state, inherited from the mother and father, respectively The molecular information allowed prenatal diagnosis in the following pregnancy resulting in the birth of a healthy carrier female. A review was carried out with the attempt to the trace the fetal ultrasound presentation of thrombocytopenia–absent radius syndrome and discussing opportunities for second-tier molecular studies within a multidisciplinary setting.",
"title": ""
},
{
"docid": "7fb075251c846b7521abaa32a82b9918",
"text": "Keystroke dynamics-the analysis of typing rhythms to discriminate among users-has been proposed for detecting impostors (i.e., both insiders and external attackers). Since many anomaly-detection algorithms have been proposed for this task, it is natural to ask which are the top performers (e.g., to identify promising research directions). Unfortunately, we cannot conduct a sound comparison of detectors using the results in the literature because evaluation conditions are inconsistent across studies. Our objective is to collect a keystroke-dynamics data set, to develop a repeatable evaluation procedure, and to measure the performance of a range of detectors so that the results can be compared soundly. We collected data from 51 subjects typing 400 passwords each, and we implemented and evaluated 14 detectors from the keystroke-dynamics and pattern-recognition literature. The three top-performing detectors achieve equal-error rates between 9.6% and 10.2%. The results-along with the shared data and evaluation methodology-constitute a benchmark for comparing detectors and measuring progress.",
"title": ""
},
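The benchmark described above reports detector quality as equal-error rates. Purely as an illustration of that metric (this is not code from the cited study; the normally distributed scores below are placeholder data), the following sketch estimates an EER from genuine-user and impostor anomaly scores.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the equal-error rate from anomaly scores.

    Higher scores are assumed to mean "more anomalous", so genuine users
    should score low and impostors high. The EER is the error rate at the
    threshold where the false-accept and false-reject rates are closest.
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor < t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine >= t).mean() for t in thresholds])  # genuine users rejected
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

# Toy example with synthetic scores (genuine around 0, impostors around 2)
rng = np.random.default_rng(1)
eer = equal_error_rate(rng.normal(0.0, 1.0, 500), rng.normal(2.0, 1.0, 500))
```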
{
"docid": "b10b42c8fbe13ad8d1d04aec9df12a00",
"text": "As an alternative strategy to antibiotic use in aquatic disease management, probiotics have recently attracted extensive attention in aquaculture. However, the use of terrestrial bacterial species as probiotics for aquaculture has had limited success, as bacterial strain characteristics are dependent upon the environment in which they thrive. Therefore, isolating potential probiotic bacteria from the marine environment in which they grow optimally is a better approach. Bacteria that have been used successfully as probiotics belong to the genus Vibrio and Bacillus, and the species Thalassobacter utilis. Most researchers have isolated these probiotic strains from shrimp culture water, or from the intestine of different penaeid species. The use of probiotic bacteria, based on the principle of competitive exclusion, and the use of immunostimulants are two of the most promising preventive methods developed in the fight against diseases during the last few years. It also noticed that probiotic bacteria could produce some digestive enzymes, which might improve the digestion of shrimp, thus enhancing the ability of stress resistance and health of the shrimp. However, the probiotics in aquatic environment remain to be a controversial concept, as there was no authentic evidence / real environment demonstrations on the successful use of probiotics and their mechanisms of action in vivo. The present review highlights the potential sources of probiotics, mechanism of action, diversity of probiotic microbes and challenges of probiotic usage in shrimp aquaculture.",
"title": ""
},
{
"docid": "713d446993c0f7937eaa60ef0f8a34b8",
"text": "Abiotic stress induces several changes in plants at physiological and molecular level. Plants have evolved regulatory mechanisms guided towards establishment of stress tolerance in which epigenetic modifications play a pivotal role. We provide examples of gene expression changes that are brought about by conversion of active chromatin to silent heterochromatin and vice versa. Methylation of CG sites and specific modification of histone tail determine whether a particular locus is transcriptionally active or silent. We present a lucid review of epigenetic machinery and epigenetic alterations involving DNA methylation, histone tail modifications, chromatin remodeling, and RNA directed epigenetic changes.",
"title": ""
},
{
"docid": "99c088268633c19a8c4789c58c4c9aca",
"text": "Executing agile quadrotor maneuvers with cablesuspended payloads is a challenging problem and complications induced by the dynamics typically require trajectory optimization. State-of-the-art approaches often need significant computation time and complex parameter tuning. We present a novel dynamical model and a fast trajectory optimization algorithm for quadrotors with a cable-suspended payload. Our first contribution is a new formulation of the suspended payload behavior, modeled as a link attached to the quadrotor with a combination of two revolute joints and a prismatic joint, all being passive. Differently from state of the art, we do not require the use of hybrid modes depending on the cable tension. Our second contribution is a fast trajectory optimization technique for the aforementioned system. Our model enables us to pose the trajectory optimization problem as a Mathematical Program with Complementarity Constraints (MPCC). Desired behaviors of the system (e.g., obstacle avoidance) can easily be formulated within this framework. We show that our approach outperforms the state of the art in terms of computation speed and guarantees feasibility of the trajectory with respect to both the system dynamics and control input saturation, while utilizing far fewer tuning parameters. We experimentally validate our approach on a real quadrotor showing that our method generalizes to a variety of tasks, such as flying through desired waypoints while avoiding obstacles, or throwing the payload toward a desired target. To the best of our knowledge, this is the first time that three-dimensional, agile maneuvers exploiting the system dynamics have been achieved on quadrotors with a cable-suspended payload. SUPPLEMENTARY MATERIAL This paper is accompanied by a video showcasing the experiments: https://youtu.be/s9zb5MRXiHA",
"title": ""
},
{
"docid": "7a398cae0109297a19195691505b8caf",
"text": "There is a growing interest in models that can learn from unlabelled speech paired with visual context. This setting is relevant for low-resource speech processing, robotics, and human language acquisition research. Here, we study how a visually grounded speech model, trained on images of scenes paired with spoken captions, captures aspects of semantics. We use an external image tagger to generate soft text labels from images, which serve as targets for a neural model that maps untranscribed speech to semantic keyword labels. We introduce a newly collected data set of human semantic relevance judgements and an associated task, semantic speech retrieval, where the goal is to search for spoken utterances that are semantically relevant to a given text query. Without seeing any text, the model trained on parallel speech and images achieves a precision of almost 60% on its top ten semantic retrievals. Compared to a supervised model trained on transcriptions, our model matches human judgements better by some measures, especially in retrieving non-verbatim semantic matches. We perform an extensive analysis of the model and its resulting representations.",
"title": ""
},
{
"docid": "31c2dc8045f43c7bf1aa045e0eb3b9ad",
"text": "This paper addresses the task of functional annotation of genes from biomedical literature. We view this task as a hierarchical text categorization problem with Gene Ontology as a class hierarchy. We present a novel global hierarchical learning approach that takes into account the semantics of a class hierarchy. This algorithm with AdaBoost as the underlying learning procedure significantly outperforms the corresponding “flat” approach, i.e. the approach that does not consider any hierarchical information. In addition, we propose a novel hierarchical evaluation measure that gives credit to partially correct classification and discriminates errors by both distance and depth in a class hierarchy.",
"title": ""
},
{
"docid": "5016ab74ebd9c1359e8dec80ee220bcf",
"text": "The possibility of communication between plants was proposed nearly 20 years ago, although previous demonstrations have suffered from methodological problems and have not been widely accepted. Here we report the first rigorous, experimental evidence demonstrating that undamaged plants respond to cues released by neighbors to induce higher levels of resistance against herbivores in nature. Sagebrush plants that were clipped in the field released a pulse of an epimer of methyl jasmonate that has been shown to be a volatile signal capable of inducing resistance in wild tobacco. Wild tobacco plants with clipped sagebrush neighbors had increased levels of the putative defensive oxidative enzyme, polyphenol oxidase, relative to control tobacco plants with unclipped sagebrush neighbors. Tobacco plants near clipped sagebrush experienced greatly reduced levels of leaf damage by grasshoppers and cutworms during three field seasons compared to unclipped controls. This result was not caused by an altered light regime experienced by tobacco near clipped neighbors. Barriers to soil contact between tobacco and sagebrush did not reduce the difference in leaf damage although barriers that blocked air contact negated the effect.",
"title": ""
},
{
"docid": "3207403c7f748bd7935469a74aa1c38f",
"text": "This article briefly reviews the rise of Critical Discourse Analysis and teases out a detailed analysis of the various critiques that have been levelled at CDA and its practitioners over the last twenty years, both by scholars working within the “critical” paradigm and by other critics. A range of criticisms are discussed which target the underlying premises, the analytical methodology and the disputed areas of reader response and the integration of contextual factors. Controversial issues such as the predominantly negative focus of much CDA scholarship, and the status of CDA as an emergent “intellectual orthodoxy”, are also reviewed. The conclusions offer a summary of the principal criticisms that emerge from this overview, and suggest some ways in which these problems could be attenuated.",
"title": ""
},
{
"docid": "d952d54231f1093129fe23f051fc858d",
"text": "As part of the Face Recognition Technology (FERET) program, the U.S. Army Research Laboratory (ARL) conducted supervised government tests and evaluations of automatic face recognition algorithms. The goal of the tests was to provide an independent method of evaluating algorithms and assessing the state of the art in automatic face recognition. This report describes the design and presents the results of the August 1994 and March 1995 FERET tests. Results for FERET tests administered by ARL between August 1994 and August 1996 are reported.",
"title": ""
},
{
"docid": "3ce021aa52dac518e1437d397c63bf68",
"text": "Malaria is a common and sometimes fatal disease caused by infection with Plasmodium parasites. Cerebral malaria (CM) is a most severe complication of infection with Plasmodium falciparum parasites which features a complex immunopathology that includes a prominent neuroinflammation. The experimental mouse model of cerebral malaria (ECM) induced by infection with Plasmodium berghei ANKA has been used abundantly to study the role of single genes, proteins and pathways in the pathogenesis of CM, including a possible contribution to neuroinflammation. In this review, we discuss the Plasmodium berghei ANKA infection model to study human CM, and we provide a summary of all host genetic effects (mapped loci, single genes) whose role in CM pathogenesis has been assessed in this model. Taken together, the reviewed studies document the many aspects of the immune system that are required for pathological inflammation in ECM, but also identify novel avenues for potential therapeutic intervention in CM and in diseases which feature neuroinflammation.",
"title": ""
},
{
"docid": "314722d112f5520f601ed6917f519466",
"text": "In this work we propose an online multi person pose tracking approach which works on two consecutive frames It−1 and It . The general formulation of our temporal network allows to rely on any multi person pose estimation approach as spatial network. From the spatial network we extract image features and pose features for both frames. These features serve as input for our temporal model that predicts Temporal Flow Fields (TFF). These TFF are vector fields which indicate the direction in which each body joint is going to move from frame It−1 to frame It . This novel representation allows to formulate a similarity measure of detected joints. These similarities are used as binary potentials in a bipartite graph optimization problem in order to perform tracking of multiple poses. We show that these TFF can be learned by a relative small CNN network whilst achieving state-of-the-art multi person pose tracking results.",
"title": ""
},
{
"docid": "947fdb3233e57b5df8ce92df31f2a0be",
"text": "Recent work by Cohen et al. [1] has achieved state-of-the-art results for learning spherical images in a rotation invariant way by using ideas from group representation theory and noncommutative harmonic analysis. In this paper we propose a generalization of this work that generally exhibits improved performace, but from an implementation point of view is actually simpler. An unusual feature of the proposed architecture is that it uses the Clebsch–Gordan transform as its only source of nonlinearity, thus avoiding repeated forward and backward Fourier transforms. The underlying ideas of the paper generalize to constructing neural networks that are invariant to the action of other compact groups.",
"title": ""
},
{
"docid": "25238b85534ee95d70e581145fa28c07",
"text": "Advances in sequencing and high-throughput techniques have provided an unprecedented opportunity to interrogate human diseases on a genome-wide scale. The list of disease-causing mutations is expanding rapidly, and mutations affecting mRNA translation are no exception. Translation (protein synthesis) is one of the most complex processes in the cell. The orchestrated action of ribosomes, tRNAs and numerous translation factors decodes the information contained in mRNA into a polypeptide chain. The intricate nature of this process renders it susceptible to deregulation at multiple levels. In this Review, we summarize current evidence of translation deregulation in human diseases other than cancer. We discuss translation-related diseases on the basis of the molecular aberration that underpins their pathogenesis (including tRNA dysfunction, ribosomopathies, deregulation of the integrated stress response and deregulation of the mTOR pathway) and describe how deregulation of translation generates the phenotypic variability observed in these disorders. Translation deregulation causes many human diseases, which can be broadly categorized into tRNA or ribosomal dysfunction, and deregulation of the integrated stress response or the mTOR pathway. The complexity of the translation process and its cellular contexts could explain the phenotypic variability of these disorders.",
"title": ""
}
] | scidocsrr |
bd3fba0b990bdb15fac6dc7062496162 | Visual SLAM with Line and Corner Features | [
{
"docid": "a88b2916f73dedabceda574f10a93672",
"text": "A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty. KEY WORDS—localization, mapping, visual landmarks, mobile robot",
"title": ""
},
{
"docid": "a53065d1cfb1fe898182d540d65d394b",
"text": "This paper presents a novel approach for detecting affine invariant interest points. Our method can deal with significant affine transformations including large scale changes. Such transformations introduce significant changes in the point location as well as in the scale and the shape of the neighbourhood of an interest point. Our approach allows to solve for these problems simultaneously. It is based on three key ideas : 1) The second moment matrix computed in a point can be used to normalize a region in an affine invariant way (skew and stretch). 2) The scale of the local structure is indicated by local extrema of normalized derivatives over scale. 3) An affine-adapted Harris detector determines the location of interest points. A multi-scale version of this detector is used for initialization. An iterative algorithm then modifies location, scale and neighbourhood of each point and converges to affine invariant points. For matching and recognition, the image is characterized by a set of affine invariant points ; the affine transformation associated with each point allows the computation of an affine invariant descriptor which is also invariant to affine illumination changes. A quantitative comparison of our detector with existing ones shows a significant improvement in the presence of large affine deformations. Experimental results for wide baseline matching show an excellent performance in the presence of large perspective transformations including significant scale changes. Results for recognition are very good for a database with more than 5000 images.",
"title": ""
}
] | [
{
"docid": "71da47c6837022a80dccabb0a1f5c00e",
"text": "The treatment of obesity and cardiovascular diseases is one of the most difficult and important challenges nowadays. Weight loss is frequently offered as a therapy and is aimed at improving some of the components of the metabolic syndrome. Among various diets, ketogenic diets, which are very low in carbohydrates and usually high in fats and/or proteins, have gained in popularity. Results regarding the impact of such diets on cardiovascular risk factors are controversial, both in animals and humans, but some improvements notably in obesity and type 2 diabetes have been described. Unfortunately, these effects seem to be limited in time. Moreover, these diets are not totally safe and can be associated with some adverse events. Notably, in rodents, development of nonalcoholic fatty liver disease (NAFLD) and insulin resistance have been described. The aim of this review is to discuss the role of ketogenic diets on different cardiovascular risk factors in both animals and humans based on available evidence.",
"title": ""
},
{
"docid": "af2a1083436450b9147eb7b51be5c761",
"text": "Over the past century, various value models have been proposed. To determine which value model best predicts prosocial behavior, mental health, and pro-environmental behavior, we subjected seven value models to a hierarchical regression analysis. A sample of University students (N = 271) completed the Portrait Value Questionnaire (Schwartz et al., 2012), the Basic Value Survey (Gouveia et al., 2008), and the Social Value Orientation scale (Van Lange et al., 1997). Additionally, they completed the Values Survey Module (Hofstede and Minkov, 2013), Inglehart's (1977) materialism-postmaterialism items, the Study of Values, fourth edition (Allport et al., 1960; Kopelman et al., 2003), and the Rokeach (1973) Value Survey. However, because the reliability of the latter measures was low, only the PVQ-RR, the BVS, and the SVO where entered into our analysis. Our results provide empirical evidence that the PVQ-RR is the strongest predictor of all three outcome variables, explaining variance above and beyond the other two instruments in almost all cases. The BVS significantly predicted prosocial and pro-environmental behavior, while the SVO only explained variance in pro-environmental behavior.",
"title": ""
},
{
"docid": "3c103640a41779e8069219b9c4849ba7",
"text": "Electronic banking is becoming more popular every day. Financial institutions have accepted the transformation to provide electronic banking facilities to their customers in order to remain relevant and thrive in an environment that is competitive. A contributing factor to the customer retention rate is the frequent use of multiple online functionality however despite all the benefits of electronic banking, some are still hesitant to use it because of security concerns. The perception is that gender, age, education level, salary, culture and profession all have an impact on electronic banking usage. This study reports on how the Knowledge Discovery and Data Mining (KDDM) process was used to determine characteristics and electronic banking behavior of high net worth individuals at a South African bank. Findings JIBC December 2017, Vol. 22, No.3 2 indicate that product range and age had the biggest impact on electronic banking behavior. The value of user segmentation is that the financial institution can provide a more accurate service to their users based on their preferences and online banking behavior.",
"title": ""
},
{
"docid": "cceb05e100fe8c9f9dab9f6525d435db",
"text": "Conventional feedback control methods can solve various types of robot control problems very efficiently by capturing the structure with explicit models, such as rigid body equations of motion. However, many control problems in modern manufacturing deal with contacts and friction, which are difficult to capture with first-order physical modeling. Hence, applying control design methodologies to these kinds of problems often results in brittle and inaccurate controllers, which have to be manually tuned for deployment. Reinforcement learning (RL) methods have been demonstrated to be capable of learning continuous robot controllers from interactions with the environment, even for problems that include friction and contacts. In this paper, we study how we can solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods, and the residual which is solved with RL. The final control policy is a superposition of both control signals. We demonstrate our approach by training an agent to successfully perform a real-world block assembly task involving contacts and unstable objects.",
"title": ""
},
{
"docid": "82aab8fe60da7c23eef945d7a1ec00fe",
"text": "A novel broadband dual-polarized crossed-dipole antenna element with parasitic branches is designed for 2G/3G/LTE base stations. The proposed antenna mainly comprises a curved reflector, two crossed-dipoles, a pair of feeding strips, and two couples of balanced-unbalanced (BALUN) transformers. Compared to the traditional square-loop radiator dipole, the impedance bandwidth of the proposed antenna can be greatly improved after employing two parasitic metal stubs and two pairs of parasitic metal branches, and a better radiation performance of the proposed antenna can be obtained by optimizing the angle of the reflector. Simulation results show that the proposed antenna element can operate from 1.7 to 2.7 GHz with has an impedance bandwidth of VSWR <; 1.5, the port isolation of more than 30 dB, a stable radiation pattern with half-power beamwidth 65.2°-5.6° at H-plane and V-plane, and a relatively stable dipole antenna gain of 8.5 ± 0.4 (dBi). Furthermore, measured results have a good agreement with simulated ones.",
"title": ""
},
{
"docid": "eb781b72664c6ed36c5aa87e8f456bd4",
"text": "We suggest that planning for automated earthmoving operations such as digging a foundation or leveling a mound of soil, be treated at multiple levels. In a system that we have developed, a coarse-level planner is used to tessellate the volume to be excavated into smaller pieces that are sequenced in order to complete the task efficiently. Each of the smaller volumes is treated with a refined planner that selects digging actions based on constraint optimization over the space of prototypical digging actions. We discuss planners and the associated representations for two types of earthmoving machines: an excavator backhoe and a wheel loader. Experimental results from a full-scale automated excavator and simulated wheel loader are presented .",
"title": ""
},
{
"docid": "c527d891bb7baeabad43cba148a0fcf9",
"text": "As a framework for extractive summarization, sentence regression has achieved state-of-the-art performance in several widely-used practical systems. The most challenging task within the sentence regression framework is to identify discriminative features to encode a sentence into a feature vector. So far, sentence regression approaches have neglected to use features that capture contextual relations among sentences.\n We propose a neural network model, Contextual Relation-based Summarization (CRSum), to take advantage of contextual relations among sentences so as to improve the performance of sentence regression. Specifically, we first use sentence relations with a word-level attentive pooling convolutional neural network to construct sentence representations. Then, we use contextual relations with a sentence-level attentive pooling recurrent neural network to construct context representations. Finally, CRSum automatically learns useful contextual features by jointly learning representations of sentences and similarity scores between a sentence and sentences in its context. Using a two-level attention mechanism, CRSum is able to pay attention to important content, i.e., words and sentences, in the surrounding context of a given sentence.\n We carry out extensive experiments on six benchmark datasets. CRSum alone can achieve comparable performance with state-of-the-art approaches; when combined with a few basic surface features, it significantly outperforms the state-of-the-art in terms of multiple ROUGE metrics.",
"title": ""
},
{
"docid": "6c4a7a6d21c85f3f2f392fbb1621cc51",
"text": "The International Academy of Education (IAE) is a not-for-profit scientific association that promotes educational research, and its dissemination and implementation. Founded in 1986, the Academy is dedicated to strengthening the contributions of research, solving critical educational problems throughout the world, and providing better communication among policy makers, researchers, and practitioners. The general aim of the IAE is to foster scholarly excellence in all fields of education. Towards this end, the Academy provides timely syntheses of research-based evidence of international importance. The Academy also provides critiques of research and of its evidentiary basis and its application to policy. This booklet about teacher professional learning and development has been prepared for inclusion in the Educational Practices Series developed by the International Academy of Education and distributed by the International Bureau of Education and the Academy. As part of its mission, the Academy provides timely syntheses of research on educational topics of international importance. This is the eighteenth in a series of booklets on educational practices that generally improve learning. This particular booklet is based on a synthesis of research evidence produced for the New Zealand Ministry of Education's Iterative Best Evidence Synthesis (BES) Programme, which is designed to be a catalyst for systemic improvement and sustainable development in education. This synthesis, and others in the series, are available electronically at www.educationcounts.govt.nz/themes/BES. All BESs are written using a collaborative approach that involves the writers, teacher unions, principal groups, teacher educators, academics, researchers, policy advisers, and other interested parties. To ensure its rigour and usefulness, each BES follows national guidelines developed by the Ministry of Education. Professor Helen Timperley was lead writer for the Teacher Professional Learning and Development: Best Evidence Synthesis Iteration [BES], assisted by teacher educators Aaron Wilson and Heather Barrar and research assistant Irene Fung, all of the University of Auckland. The BES is an analysis of 97 studies of professional development that led to improved outcomes for the students of the participating teachers. Most of these studies came from the United States, New Zealand, the Netherlands, the United Kingdom, Canada, and Israel. Dr Lorna Earl provided formative quality assurance for the synthesis; Professor John Hattie and Dr Gavin Brown oversaw the analysis of effect sizes. Helen Timperley is Professor of Education at the University of Auckland. The primary focus of her research is promotion of professional and organizational learning in schools for the purpose of improving student learning. She has …",
"title": ""
},
{
"docid": "c2df8cc7775bd4ec2bfdf4498d136c9f",
"text": "Particle Swarm Optimization is a popular heuristic search algorithm which is inspired by the social learning of birds or fishes. It is a swarm intelligence technique for optimization developed by Eberhart and Kennedy [1] in 1995. Inertia weight is an important parameter in PSO, which significantly affects the convergence and exploration-exploitation trade-off in PSO process. Since inception of Inertia Weight in PSO, a large number of variations of Inertia Weight strategy have been proposed. In order to propose one or more than one Inertia Weight strategies which are efficient than others, this paper studies 15 relatively recent and popular Inertia Weight strategies and compares their performance on 05 optimization test problems.",
"title": ""
},
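Since the abstract above concerns inertia-weight strategies in PSO, a brief sketch of the velocity update with a linearly decreasing inertia weight is given below as an illustration; the parameter values (w falling from 0.9 to 0.4, c1 = c2 = 2.0) are common defaults in the PSO literature and are not taken from the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, iteration, max_iter,
             w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, rng=None):
    """One PSO velocity/position update with a linearly decreasing inertia weight.

    x, v         : (n_particles, dim) positions and velocities
    pbest, gbest : personal bests (same shape as x) and global best (dim,)
    The inertia weight decreases linearly from w_max to w_min over max_iter iterations.
    """
    rng = rng or np.random.default_rng()
    w = w_max - (w_max - w_min) * iteration / max_iter
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```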
{
"docid": "dfb5a6dbd1b8788cda6cb41ba741006d",
"text": "The notion of ‘user satisfaction’ plays a prominent role in HCI, yet it remains evasive. This exploratory study reports three experiments from an ongoing research program. In this program we aim to uncover (1) what user satisfaction is, (2) whether it is primarily determined by user expectations or by the interactive experience, (3) how user satisfaction may be related to perceived usability, and (4) the extent to which satisfaction rating scales capture the same interface qualities as uncovered in self-reports of interactive experiences. In all three experiments reported here user satisfaction was found to be a complex construct comprising several concepts, the distribution of which varied with the nature of the experience. Expectations were found to play an important role in the way users approached a browsing task. Satisfaction and perceived usability was assessed using two methods: scores derived from unstructured interviews and from the Web site Analysis MeasureMent Inventory (WAMMI) rating scales. Scores on these two instruments were somewhat similar, but conclusions drawn across all three experiments differed in terms of satisfaction ratings, suggesting that rating scales and interview statements may tap different interface qualities. Recent research suggests that ‘beauty’, or ‘appeal’ is linked to perceived usability so that what is ‘beautiful’ is also perceived to be usable [Interacting with Computers 13 (2000) 127]. This was true in one experiment here using a web site high in perceived usability and appeal. However, using a site with high appeal but very low in perceived usability yielded very high satisfaction, but low perceived usability scores, suggesting that what is ‘beautiful’ need not also be perceived to be usable. The results suggest that web designers may need to pay attention to both visual appeal and usability. q 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "405cd4bacbcfddc9b4254aee166ee394",
"text": "A fundamental problem for the visual perception of 3D shape is that patterns of optical stimulation are inherently ambiguous. Recent mathematical analyses have shown, however, that these ambiguities can be highly constrained, so that many aspects of 3D structure are uniquely specified even though others might be underdetermined. Empirical results with human observers reveal a similar pattern of performance. Judgments about 3D shape are often systematically distorted relative to the actual structure of an observed scene, but these distortions are typically constrained to a limited class of transformations. These findings suggest that the perceptual representation of 3D shape involves a relatively abstract data structure that is based primarily on qualitative properties that can be reliably determined from visual information.",
"title": ""
},
{
"docid": "758978c4b8f3bdd0a57fe9865892fbc3",
"text": "The foundation of a process model lies in its structural specifications. Using a generic process modeling language for workflows, we show how a structural specification may contain deadlock and lack of synchronization conflicts that could compromise the correct execution of workflows. In general, identification of such conflicts is a computationally complex problem and requires development of effective algorithms specific for the target modeling language. We present a visual verification approach and algorithm that employs a set of graph reduction rules to identify structural conflicts in process models for the given workflow modeling language. We also provide insights into the correctness and complexity of the reduction process. Finally, we show how the reduction algorithm may be used to count possible instance subgraphs of a correct process model. The main contribution of the paper is a new technique for satisfying well-defined correctness criteria in process models.",
"title": ""
},
{
"docid": "7bf64a2dbfa14b52d0ee46d0c61bf8d2",
"text": "Mobility prediction allows estimating the stability of paths in a mobile wireless Ad Hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural network based method for mobility prediction in Ad Hoc networks. This method consists of a multi-layer and recurrent neural network using back propagation through time algorithm for training.",
"title": ""
},
{
"docid": "a4790fdc5f6469b45fa4a22a871f3501",
"text": "NSGA ( [5]) is a popular non-domination based genetic algorithm for multiobjective optimization. It is a very effective algorithm but has been generally criticized for its computational complexity, lack of elitism and for choosing the optimal parameter value for sharing parameter σshare. A modified version, NSGAII ( [3]) was developed, which has a better sorting algorithm , incorporates elitism and no sharing parameter needs to be chosen a priori. NSGA-II is discussed in detail in this.",
"title": ""
},
{
"docid": "d2f64c21d0a3a54b4a2b75b7dd7df029",
"text": "Library of Congress Cataloging in Publication Data EB. Boston studies in the philosophy of science.The concept of autopoiesis is due to Maturana and Varela 8, 9. The aim of this article is to revisit the concepts of autopoiesis and cognition in the hope of.Amazon.com: Autopoiesis and Cognition: The Realization of the Living Boston Studies in the Philosophy of Science, Vol. 42 9789027710161: H.R. Maturana.Autopoiesis, The Santiago School of Cognition, and. In their early work together Maturana and Varela developed the idea of autopoiesis.Autopoiesis and Cognition: The Realization of the Living Dordecht.",
"title": ""
},
{
"docid": "74c86a2ff975d8298b356f0243e82ab0",
"text": "Building intelligent agents that can communicate with and learn from humans in natural language is of great value. Supervised language learning is limited by the ability of capturing mainly the statistics of training data, and is hardly adaptive to new scenarios or flexible for acquiring new knowledge without inefficient retraining or catastrophic forgetting. We highlight the perspective that conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition and propose a joint imitation and reinforcement approach for grounded language learning through an interactive conversational game. The agent trained with this approach is able to actively acquire information by asking questions about novel objects and use the justlearned knowledge in subsequent conversations in a one-shot fashion. Results compared with other methods verified the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "cbc9437811bff9a1d96dd5d5f886c598",
"text": "Weakly supervised learning for object detection has been gaining significant attention in the recent past. Visually similar objects are extracted automatically from weakly labelled videos hence bypassing the tedious process of manually annotating training data. However, the problem as applied to small or medium sized objects is still largely unexplored. Our observation is that weakly labelled information can be derived from videos involving human-object interactions. Since the object is characterized neither by its appearance nor its motion in such videos, we propose a robust framework that taps valuable human context and models similarity of objects based on appearance and functionality. Furthermore, the framework is designed such that it maximizes the utility of the data by detecting possibly multiple instances of an object from each video. We show that object models trained in this fashion perform between 86% and 92% of their fully supervised counterparts on three challenging RGB and RGB-D datasets.",
"title": ""
},
{
"docid": "8f3b18f410188ae4f7b09435ce92639e",
"text": "Biogenic amines are important nitrogen compounds of biological importance in vegetable, microbial and animal cells. They can be detected in both raw and processed foods. In food microbiology they have sometimes been related to spoilage and fermentation processes. Some toxicological characteristics and outbreaks of food poisoning are associated with histamine and tyramine. Secondary amines may undergo nitrosation and form nitrosamines. A better knowledge of the factors controlling their formation is necessary in order to improve the quality and safety of food.",
"title": ""
},
{
"docid": "a47d001dc8305885e42a44171c9a94b2",
"text": "Community detection in complex network has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and relatively new proposed link clustering methods have inherent drawbacks to discover overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing node clustering technique. The network decomposition contributes to reducing the computation time and noise link elimination conduces to improving the quality of obtained communities. Besides, we employ node clustering technique rather than link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.",
"title": ""
},
{
"docid": "fb60eb0a7334ce5c5d3c62b812b9f4f8",
"text": "The structure and culture of an organization does affect implementation of projects. In this paper we try to identify organizational factors that could affect the implementation efforts of an Integrated Financial Management Information System (IFMIS). The information system in question has taken overtly a long time and it's not complete yet. We set out to And out whether organizational issues are at play in this particular project. The project under study is a large-scale integrated information system which aims at strengthening and further developing Financial Management Information in the wider public service in Kenya. We borrow concepts from Structuration Theory (ST) as applied in sociology to understand the organizational perspective in the project. We use the theory to help explain some of the meanings, norms and issues of power experienced during the implementation of the IFMIS. Without ruling out problems of technological nature, the findings suggest that many of the problems in the IFMIS implementation may be attributed to organizational factors, and that certain issues are related to the existing organization culture within government.",
"title": ""
}
] | scidocsrr |
4c2f0475c875d7d0d8fe1db66f329323 | Learning to Drive using Inverse Reinforcement Learning and Deep Q-Networks | [
{
"docid": "e0f5f73eb496b77cddc5820fb6306f4b",
"text": "Safe handling of dynamic highway and inner city scenarios with autonomous vehicles involves the problem of generating traffic-adapted trajectories. In order to account for the practical requirements of the holistic autonomous system, we propose a semi-reactive trajectory generation method, which can be tightly integrated into the behavioral layer. The method realizes long-term objectives such as velocity keeping, merging, following, stopping, in combination with a reactive collision avoidance by means of optimal-control strategies within the Frenét-Frame [12] of the street. The capabilities of this approach are demonstrated in the simulation of a typical high-speed highway scenario.",
"title": ""
}
] | [
{
"docid": "723f2257daace86d9cd72d26b59c211d",
"text": "Instead of simply using two-dimensional User × Item features, advanced recommender systems rely on more additional dimensions (e.g. time, location, social network) in order to provide better recommendation services. In the first part of this paper, we will survey a variety of dimension features and show how they are integrated into the recommendation process. When the service providers collect more and more personal information, it brings great privacy concerns to the public. On another side, the service providers could also suffer from attacks launched by malicious users who want to bias the recommendations. In the second part of this paper, we will survey attacks from and against recommender service providers, and existing solutions.",
"title": ""
},
{
"docid": "329cf5a87b554a3eb233bd8227bc78a1",
"text": "Anomaly detection refers to methods that provide warnings of unusual behaviors which may compromise the security and performance of communication networks. In this paper it is proposed a novel model for network anomaly detection combining baseline, K-means clustering and particle swarm optimization (PSO). The baseline consists of network traffic normal behavior profiles, generated by the application of Baseline for Automatic Backbone Management (BLGBA) model in SNMP historical network data set, while K-means is a supervised learning clustering algorithm used to recognize patterns or features in data sets. In order to escape from local optima problem, the K-means is associated to PSO, which is a meta-heuristic whose main characteristics include low computational complexity and small number of input parameters dependence. The proposed anomaly detection approach classifies data clusters from baseline and real traffic using the K-means combined with PSO. Anomalous behaviors can be identified by comparing the distance between real traffic and cluster centroids. Tests were performed in the network of State University of Londrina and the obtained detection and false alarm rates are promising.",
"title": ""
},
{
"docid": "033fb4c857f79fc593bd9a7e12269b49",
"text": "Within any Supply Chain Risk Management (SCRM) approach, the concept “Risk” occupies a central interest. Numerous frameworks which differ by the provided definitions and relationships between supply chain risk dimensions and metrics are available. This article provides an outline of the most common SCRM methodologies, in order to suggest an “integrated conceptual model”. The objective of such an integrated model is not to describe yet another conceptual model of Risk, but rather to offer a concrete structure incorporating the characteristics of the supply chain in the risk management process. The proposed alignment allows a better understanding of the dynamic of risk management strategies. Firstly, the model was analyzed through its positioning and its contributions compared to existing tools and models in the literature. This comparison highlights the critical points overlooked in the past. Secondly, the model was applied on case studies of major supply chain crisis.",
"title": ""
},
{
"docid": "4e50dff9307dcbe43ef8bee9df1f0d1b",
"text": "Research advancements allow computational systems to automatically caption social media images. Often, these captions are evaluated with sighted humans using the image as a reference. Here, we explore how blind and visually impaired people experience these captions in two studies about social media images. Using a contextual inquiry approach (n=6 blind/visually impaired), we found that blind people place a lot of trust in automatically generated captions, filling in details to resolve differences between an image's context and an incongruent caption. We built on this in-person study with a second, larger online experiment (n=100 blind/visually impaired) to investigate the role of phrasing in encouraging trust or skepticism in captions. We found that captions emphasizing the probability of error, rather than correctness, encouraged people to attribute incongruence to an incorrect caption, rather than missing details. Where existing research has focused on encouraging trust in intelligent systems, we conclude by challenging this assumption and consider the benefits of encouraging appropriate skepticism.",
"title": ""
},
{
"docid": "a10aa780d9f1a65461ad0874173d8f56",
"text": "OS fingerprinting tries to identify the type and version of a system based on gathered information of a target host. It is an essential step for many subsequent penetration attempts and attacks. Traditional OS fingerprinting depends on banner grabbing schemes or network traffic analysis results to identify the system. These interactive procedures can be detected by intrusion detection systems (IDS) or fooled by fake network packets. In this paper, we propose a new OS fingerprinting mechanism in virtual machine hypervisors that adopt the memory de-duplication technique. Specifically, when multiple memory pages with the same contents occupy only one physical page, their reading and writing access delay will demonstrate some special properties. We use the accumulated access delay to the memory pages that are unique to some specific OS images to derive out whether or not our VM instance and the target VM are using the same OS. The experiment results on VMware ESXi hypervisor with both Windows and Ubuntu Linux OS images show the practicability of the attack. We also discuss the mechanisms to defend against such attacks by the hypervisors and VMs.",
"title": ""
},
{
"docid": "c4dbf075f91d1a23dda421261911a536",
"text": "In cultures of the Litopenaeus vannamei with biofloc, the concentrations of nitrate rise during the culture period, which may cause a reduction in growth and mortality of the shrimps. Therefore, the aim of this study was to determine the effect of the concentration of nitrate on the growth and survival of shrimp in systems using bioflocs. The experiment consisted of four treatments with three replicates each: The concentrations of nitrate that were tested were 75 (control), 150, 300, and 600 mg NO3 −-N/L. To achieve levels above 75 mg NO3 −-N/L, different dosages of sodium nitrate (PA) were added. For this purpose, twelve experimental units with a useful volume of 45 L were stocked with 15 juvenile L. vannamei (1.30 ± 0.31 g), corresponding to a stocking density of 333 shrimps/m3, that were reared for an experimental period of 42 days. Regarding the water quality parameters measured throughout the study, no significant differences were detected (p > 0.05). Concerning zootechnical performance, a significant difference (p < 0.05) was verified with the 75 (control) and 150 treatments presenting the best performance indexes, while the 300 and 600 treatments led to significantly poorer results (p < 0.05). The histopathological damage was observed in the gills and hepatopancreas of the shrimps exposed to concentrations ≥300 mg NO3 −-N/L for 42 days, and poorer zootechnical performance and lower survival were observed in the shrimps reared at concentrations ≥300 mg NO3 −-N/L under a salinity of 23. The results obtained in this study show that concentrations of nitrate up to 177 mg/L are acceptable for the rearing of L. vannamei in systems with bioflocs, without renewal of water, at a salinity of 23.",
"title": ""
},
{
"docid": "799517016245ffa33a06795b26e308cc",
"text": "The goal of this ”proyecto fin de carrera” was to produce a review of the face detection and face recognition literature as comprehensive as possible. Face detection was included as a unavoidable preprocessing step for face recogntion, and as an issue by itself, because it presents its own difficulties and challenges, sometimes quite different from face recognition. We have soon recognized that the amount of published information is unmanageable for a short term effort, such as required of a PFC, so in agreement with the supervisor we have stopped at a reasonable time, having reviewed most conventional face detection and face recognition approaches, leaving advanced issues, such as video face recognition or expression invariances, for the future work in the framework of a doctoral research. I have tried to gather much of the mathematical foundations of the approaches reviewed aiming for a self contained work, which is, of course, rather difficult to produce. My supervisor encouraged me to follow formalism as close as possible, preparing this PFC report more like an academic report than an engineering project report.",
"title": ""
},
{
"docid": "094dbd57522cb7b9b134b14852bea78b",
"text": "When encountering qualitative research for the first time, one is confronted with both the number of methods and the difficulty of collecting, analysing and presenting large amounts of data. In quantitative research, it is possible to make a clear distinction between gathering and analysing data. However, this distinction is not clear-cut in qualitative research. The objective of this paper is to provide insight for the novice researcher and the experienced researcher coming to grounded theory for the first time. For those who already have experience in the use of the method the paper provides further much needed discussion arising out of デエW マWデエラSげゲ ;Sラヮデキラミ キミ デエW I“ aキWノSく In this paper the authors present a practical application and illustrate how grounded theory method was applied to an interpretive case study research. The paper discusses grounded theory method and provides guidance for the use of the method in interpretive studies.",
"title": ""
},
{
"docid": "4706560ae6318724e6eb487d23804a76",
"text": "Schizophrenia is a complex neurodevelopmental disorder characterized by cognitive deficits. These deficits in cognitive functioning have been shown to relate to a variety of functional and treatment outcomes. Cognitive adaptation training (CAT) is a home-based, manual-driven treatment that utilizes environmental supports and compensatory strategies to bypass cognitive deficits and improve target behaviors and functional outcomes in individuals with schizophrenia. Unlike traditional case management, CAT provides environmental supports and compensatory strategies tailored to meet the behavioral style and neurocognitive deficits of each individual patient. The case of Ms. L. is presented to illustrate CAT treatment.",
"title": ""
},
{
"docid": "e565780704f5c68c985af856d0a53ce0",
"text": "Establishing trust relationships between routing nodes represents a vital security requirement to establish reliable routing processes that exclude infected or selfish nodes. In this paper, we propose a new security scheme for the Internet of things and mainly for the RPL (Routing Protocol for Low-power and Lossy Networks) called: Metric-based RPL Trustworthiness Scheme (MRTS). The primary aim is to enhance RPL security and deal with the trust inference problem. MRTS addresses trust issue during the construction and maintenance of routing paths from each node to the BR (Border Router). To handle this issue, we extend DIO (DODAG Information Object) message by introducing a new trust-based metric ERNT (Extended RPL Node Trustworthiness) and a new Objective Function TOF (Trust Objective Function). In fact, ERNT represents the trust values for each node within the network, and TOF demonstrates how ERNT is mapped to path cost. In MRTS all nodes collaborate to calculate ERNT by taking into account nodes' behavior including selfishness, energy, and honesty components. We implemented our scheme by extending the distributed Bellman-Ford algorithm. Evaluation results demonstrated that the new scheme improves the security of RPL.",
"title": ""
},
{
"docid": "ab7a69accb17ff99642ab225facec95d",
"text": "It is challenging to adopt computing-intensive and parameter-rich Convolutional Neural Networks (CNNs) in mobile devices due to limited hardware resources and low power budgets. To support multiple concurrently running applications, one mobile device needs to perform multiple CNN tests simultaneously in real-time. Previous solutions cannot guarantee a high enough frame rate when serving multiple applications with reasonable hardware and power cost. In this paper, we present a novel process-in-memory architecture to process emerging binary CNN tests in Wide-IO2 DRAMs. Compared to state-of-the-art accelerators, our design improves CNN test performance by 4× ∼ 11× with small hardware and power overhead.",
"title": ""
},
{
"docid": "d9f8858915ea3881763a0f8064102998",
"text": "Digital signatures are one of the fundamental security primitives in Vehicular Ad-Hoc Networks (VANETs) because they provide authenticity and non-repudiation in broadcast communication. However, the current broadcast authentication standard in VANETs is vulnerable to signature flooding: excessive signature verification requests that exhaust the computational resources of victims. In this paper, we propose two efficient broadcast authentication schemes, Fast Authentication (FastAuth) and Selective Authentication (SelAuth), as two countermeasures to signature flooding. FastAuth secures periodic single-hop beacon messages. By exploiting the sender's ability to predict its own future beacons, FastAuth enables 50 times faster verification than previous mechanisms using the Elliptic Curve Digital Signature Algorithm. SelAuth secures multi-hop applications in which a bogus signature may spread out quickly and impact a significant number of vehicles. SelAuth pro- vides fast isolation of malicious senders, even under a dynamic topology, while consuming only 15%--30% of the computational resources compared to other schemes. We provide both analytical and experimental evaluations based on real traffic traces and NS-2 simulations. With the near-term deployment plans of VANET on all vehicles, our approaches can make VANETs practical.",
"title": ""
},
{
"docid": "9aae377bf3ebb202b13fab2cbd85f1ce",
"text": "The paper describes a rule-based information extraction (IE) system developed for Polish medical texts. We present two applications designed to select data from medical documentation in Polish: mammography reports and hospital records of diabetic patients. First, we have designed a special ontology that subsequently had its concepts translated into two separate models, represented as typed feature structure (TFS) hierarchies, complying with the format required by the IE platform we adopted. Then, we used dedicated IE grammars to process documents and fill in templates provided by the models. In particular, in the grammars, we addressed such linguistic issues as: ambiguous keywords, negation, coordination or anaphoric expressions. Resolving some of these problems has been deferred to a post-processing phase where the extracted information is further grouped and structured into more complex templates. To this end, we defined special heuristic algorithms on the basis of sample data. The evaluation of the implemented procedures shows their usability for clinical data extraction tasks. For most of the evaluated templates, precision and recall well above 80% were obtained.",
"title": ""
},
{
"docid": "e4c27a97a355543cf113a16bcd28ca50",
"text": "A metamaterial-based broadband low-profile grid-slotted patch antenna is presented. By slotting the radiating patch, a periodic array of series capacitor loaded metamaterial patch cells is formed, and excited through the coupling aperture in a ground plane right underneath and parallel to the slot at the center of the patch. By exciting two adjacent resonant modes simultaneously, broadband impedance matching and consistent radiation are achieved. The dispersion relation of the capacitor-loaded patch cell is applied in the mode analysis. The proposed grid-slotted patch antenna with a low profile of 0.06 λ0 (λ0 is the center operating wavelength in free space) achieves a measured bandwidth of 28% for the |S11| less than -10 dB and maximum gain of 9.8 dBi.",
"title": ""
},
{
"docid": "47da8530df2160ee29ff05aee4ab0342",
"text": "The objective of this review was to update Sobal and Stunkard's exhaustive review of the literature on the relation between socioeconomic status (SES) and obesity (Psychol Bull 1989;105:260-75). Diverse research databases (including CINAHL, ERIC, MEDLINE, and Social Science Abstracts) were comprehensively searched during the years 1988-2004 inclusive, using \"obesity,\" \"socioeconomic status,\" and synonyms as search terms. A total of 333 published studies, representing 1,914 primarily cross-sectional associations, were included in the review. The overall pattern of results, for both men and women, was of an increasing proportion of positive associations and a decreasing proportion of negative associations as one moved from countries with high levels of socioeconomic development to countries with medium and low levels of development. Findings varied by SES indicator; for example, negative associations (lower SES associated with larger body size) for women in highly developed countries were most common with education and occupation, while positive associations for women in medium- and low-development countries were most common with income and material possessions. Patterns for women in higher- versus lower-development countries were generally less striking than those observed by Sobal and Stunkard; this finding is interpreted in light of trends related to globalization. Results underscore a view of obesity as a social phenomenon, for which appropriate action includes targeting both economic and sociocultural factors.",
"title": ""
},
{
"docid": "4d331769ca3f02e9ec96e172d98f3fab",
"text": "This review focuses on the most recent applications of zinc oxide (ZnO) nanostructures for tissue engineering. ZnO is one of the most investigated metal oxides, thanks to its multifunctional properties coupled with the ease of preparing various morphologies, such as nanowires, nanorods, and nanoparticles. Most ZnO applications are based on its semiconducting, catalytic and piezoelectric properties. However, several works have highlighted that ZnO nanostructures may successfully promote the growth, proliferation and differentiation of several cell lines, in combination with the rise of promising antibacterial activities. In particular, osteogenesis and angiogenesis have been effectively demonstrated in numerous cases. Such peculiarities have been observed both for pure nanostructured ZnO scaffolds as well as for three-dimensional ZnO-based hybrid composite scaffolds, fabricated by additive manufacturing technologies. Therefore, all these findings suggest that ZnO nanostructures represent a powerful tool in promoting the acceleration of diverse biological processes, finally leading to the formation of new living tissue useful for organ repair.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "104d16c298c8790ca8da0df4d7e34a4b",
"text": "musical structure of a culture or genre” (Bharucha 1984, p. 421). So, unlike tonal hierarchies that refer to cognitive representations of the structure of music across different pieces of music in the style, event hierarchies refer to a particular piece of music and the place of each event in that piece. The two hierarchies occupy complementary roles. In listening to music or music-like experimental materials (melodies and harmonic progressions), the listener responds both to the structure provided by the tonal hierarchy and the structure provided by the event hierarchy. Musical activity involves dynamic patterns of stability and instability to which both the tonal and event hierarchies contribute. Understanding the relations between them and their interaction in processing musical structure is a central issue, not yet extensively studied empirically. 3.3 Empirical Research: The Basic Studies This section outlines the classic findings that illustrate tonal relationships and the methodologies used to establish these findings. 3.3.1 The Probe Tone Method Quantification is the first step in empirical studies because it makes possible the kinds of analytic techniques needed to understand complex human behaviors. An experimental method that has been used to quantify the tonal hierarchy is called the probe-tone method (Krumhansl and Shepard 1979). It was based on the observation that if you hear the incomplete ascending C major scale, C-D-E-F-G-A-B, you strongly expect that the next tone will be the high C. It is the next logical tone in the series, proximal to the last tone of the context, B, and it is the tonic of the key. When, in the experiment, incomplete ascending and descending scale contexts were followed by the tone C (the probe tone), listeners rated it highly as to how well it completed the scale (1 = very badly, 7 = very well). Other probe tones, however, also received fairly high ratings, and they were not necessarily those that are close in pitch to the last tone of the context. For example, the more musically trained listeners also gave high ratings to the dominant, G, and the mediant, E, which together with the C form the tonic triad. The tones of the scale received higher ratings than the nonscale tones, C# D# F# G# and A#. Less musically trained listeners were more influenced by how close the probe tone was to the tone sounded most recently at the end of the context, although their ratings also contained some of the tonal hierarchy pattern. A subsequent study used this method with a variety of contexts at the beginning of the trials (Krumhansl and Kessler 1982). Contexts were chosen because they are clear indicators of a key. They included the scale, the tonic triad chord, and chord 56 C.L. Krumhansl and L.L. Cuddy sequences strongly defining major and minor keys. These contexts were followed by all possible probe tones in the 12-tone chromatic scale, which musically trained listeners were instructed to judge in terms of how well they fit with the preceding context in a musical sense. The results for contexts of the same mode (major or minor) were similar when transposed to a common tonic. Also, the results were largely independent of which particular type of context was used (e.g., chord versus chord cadence). Consequently, the rating data were transposed to a common tonic and averaged over the context types. The resulting values are termed standardized key profiles. 
The values for the major key profile are 6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88, where the first number corresponds to the mean rating for the tonic of the key, the second to the next of the 12 tones in the chromatic scale, and so on. The values for the minor key context are 6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17. These are plotted in Fig. 3.1, in which C is assumed to be the tonic. Both major and minor contexts produce clear and musically interpretable hierarchies in the sense that tones are ordered or ranked according to music-theoretic descriptions. The results of these initial studies suggested that it is possible to obtain quantitative judgments of the degree to which different tones are perceived as stable reference tones in musical contexts. The task appeared to be accessible to listeners who differed considerably in their music training. This was important for further investigations of the responses of listeners without knowledge of specialized vocabularies for describing music, or who were unfamiliar with the musical style. Finally, the results in these and many subsequent studies were quite consistent over a variety of task instructions and musical contexts used to induce a sense of key. Quantification Fig. 3.1 (a) Probe tone ratings for a C major context. (b) Probe tone ratings for a C minor context. Values from Krumhansl and Kessler (1982) 57 3 A Theory of Tonal Hierarchies in Music of the tonal hierarchies is an important first step in empirical research but, as seen later, a great deal of research has studied it from a variety of different perspectives. 3.3.2 Converging Evidence To substantiate any theoretical construct, such as the tonal hierarchy, it is important to have evidence from experiments using different methods. This strategy is known as “converging operations” (Garner et al. 1956). This section describes a number of other experimental measures that show influences of the tonal hierarchy. It has an effect on the degree to which tones are perceived as similar to one another (Krumhansl 1979), such that tones high in the hierarchy are perceived as relatively similar to one another. For example, in the key of C major, C and G are perceived as highly related, whereas C# and G# are perceived as distantly related, even though they are just as far apart objectively (in semitones). In addition, a pair of tones is heard as more related when the second is more stable in the tonal hierarchy than the first (compared to the reverse order). For example, the tones F#-G are perceived as more related to one another than are G-F# because G is higher in the tonal hierarchy than F#. Similar temporal-order asymmetries also appear in memory studies. For example, F# is more often confused with G than G is confused with F# (Krumhansl 1979). These data reflect the proposition that each tone is drawn toward, or expected to resolve to, a tone of greater stability in the tonal hierarchy. Janata and Reisberg (1988) showed that the tonal hierarchy also influenced reaction time measures in tasks requiring a categorical judgment about a tone’s key membership. For both scale and chord contexts, faster reaction times (in-key/outof-key) were obtained for tones higher in the hierarchy. In addition, a recency effect was found for the scale context as for the nonmusicians in the original probe tone study (Krumhansl and Shepard 1979). 
Miyazaki (1989) found that listeners with absolute pitch named tones highest in tonal hierarchy of C major faster and more accurately than other tones. This is remarkable because it suggests that musical training has a very specific effect on the acquisition of absolute pitch. Most of the early piano repertoire is written in the key of C major and closely related keys. All of these listeners began piano lessons as young as 3–5 years of age, and were believed to have acquired absolute pitch through exposure to piano tones. The tonal hierarchy also appears in judgments of what tone constitutes a good phrase ending (Palmer and Krumhansl 1987a, b; Boltz 1989a, b). A number of studies show that the tonal hierarchy is one of the factors that influences expectations for melodic continuations (Schmuckler 1989; Krumhansl 1991, 1995b; Cuddy and Lunney 1995; Krumhansl et al. 1999, 2000). Other factors include pitch proximity, interval size, and melodic direction. The influence of the tonal hierarchy has also been demonstrated in a study of expressive piano performance (Thompson and Cuddy 1997). Expression refers to 58 C.L. Krumhansl and L.L. Cuddy the changes in duration and dynamics (loudness) that performers add beyond the notated music. For the harmonized sequences used in their study, the performance was influenced by the tonal hierarchy. Tones that were tonally stable within a key (higher in the tonal hierarchy) tended to be played for longer duration in the melody than those less stable (lower in the tonal hierarchy). A method used more recently (Aarden 2003, described in Huron 2006) is a reaction-time task in which listeners had to judge whether unfamiliar melodies went up, down, or stayed the same (a tone was repeated). The underlying idea is that reaction times should be faster when the tone conforms to listeners’ expectations. His results confirmed this hypothesis, namely, that reaction times were faster for tones higher in the hierarchy. As described later, his data conformed to a very large statistical analysis he did of melodies in major and minor keys. Finally, tonal expectations result in event-related potentials (ERPs), changes in electrical potentials measured on the surface of the head (Besson and Faïta 1995; Besson et al. 1998). A larger P300 component, a positive change approximately 300 ms after the final tone, was found when a melody ended with a tone out of the scale of its key than a tone in the scale. This finding was especially true for musicians and familiar melodies, suggesting that learning plays some role in producing the effect; however, the effect was also present in nonmusicians, only to a lesser degree. This section has cited only a small proportion of the studies that have been conducted on tonal hierarchies. A closely related issue that has also been studied extensively is the existence of, and the effects of, a hierarchy of chords. The choice of the experiments reviewed here was to illustrate the variety of approaches that have been taken. Across the studies, consistent effects were found with many different kinds of experimental",
"title": ""
},
{
"docid": "1a7dd0fb317a9640ee6e90036d6036fa",
"text": "A genome-wide association study was performed to identify genetic factors involved in susceptibility to psoriasis (PS) and psoriatic arthritis (PSA), inflammatory diseases of the skin and joints in humans. 223 PS cases (including 91 with PSA) were genotyped with 311,398 single nucleotide polymorphisms (SNPs), and results were compared with those from 519 Northern European controls. Replications were performed with an independent cohort of 577 PS cases and 737 controls from the U.S., and 576 PSA patients and 480 controls from the U.K.. Strongest associations were with the class I region of the major histocompatibility complex (MHC). The most highly associated SNP was rs10484554, which lies 34.7 kb upstream from HLA-C (P = 7.8x10(-11), GWA scan; P = 1.8x10(-30), replication; P = 1.8x10(-39), combined; U.K. PSA: P = 6.9x10(-11)). However, rs2395029 encoding the G2V polymorphism within the class I gene HCP5 (combined P = 2.13x10(-26) in U.S. cases) yielded the highest ORs with both PS and PSA (4.1 and 3.2 respectively). This variant is associated with low viral set point following HIV infection and its effect is independent of rs10484554. We replicated the previously reported association with interleukin 23 receptor and interleukin 12B (IL12B) polymorphisms in PS and PSA cohorts (IL23R: rs11209026, U.S. PS, P = 1.4x10(-4); U.K. PSA: P = 8.0x10(-4); IL12B:rs6887695, U.S. PS, P = 5x10(-5) and U.K. PSA, P = 1.3x10(-3)) and detected an independent association in the IL23R region with a SNP 4 kb upstream from IL12RB2 (P = 0.001). Novel associations replicated in the U.S. PS cohort included the region harboring lipoma HMGIC fusion partner (LHFP) and conserved oligomeric golgi complex component 6 (COG6) genes on chromosome 13q13 (combined P = 2x10(-6) for rs7993214; OR = 0.71), the late cornified envelope gene cluster (LCE) from the Epidermal Differentiation Complex (PSORS4) (combined P = 6.2x10(-5) for rs6701216; OR 1.45) and a region of LD at 15q21 (combined P = 2.9x10(-5) for rs3803369; OR = 1.43). This region is of interest because it harbors ubiquitin-specific protease-8 whose processed pseudogene lies upstream from HLA-C. This region of 15q21 also harbors the gene for SPPL2A (signal peptide peptidase like 2a) which activates tumor necrosis factor alpha by cleavage, triggering the expression of IL12 in human dendritic cells. We also identified a novel PSA (and potentially PS) locus on chromosome 4q27. This region harbors the interleukin 2 (IL2) and interleukin 21 (IL21) genes and was recently shown to be associated with four autoimmune diseases (Celiac disease, Type 1 diabetes, Grave's disease and Rheumatoid Arthritis).",
"title": ""
},
{
"docid": "40d8c7f1d24ef74fa34be7e557dca920",
"text": "the rapid changing Internet environment has formed a competitive business setting, which provides opportunities for conducting businesses online. Availability of online transaction systems enable users to buy and make payment for products and services using the Internet platform. Thus, customers’ involvements in online purchasing have become an important trend. However, since the market is comprised of many different people and cultures, with diverse viewpoints, e-commerce businesses are being challenged by the reality of complex behavior of consumers. Therefore, it is vital to identify the factors that affect consumers purchasing decision through e-commerce in respective cultures and societies. In response to this claim, the purpose of this study is to explore the factors affecting customers’ purchasing decision through e-commerce (online shopping). Several factors such as trust, satisfaction, return policy, cash on delivery, after sale service, cash back warranty, business reputation, social and individual attitude, are considered. At this stage, the factors mentioned above, which are commonly considered influencing purchasing decision through online shopping in literature, are hypothesized to measure the causal relationship within the framework.",
"title": ""
}
] | scidocsrr |
34f55fae069b6bbe6f1ca9a850542add | A Deep Learning Driven Active Framework for Segmentation of Large 3D Shape Collections | [
{
"docid": "e8eaeb8a2bb6fa71997aa97306bf1bb0",
"text": "Article history: Available online 18 February 2016",
"title": ""
},
{
"docid": "87af466921c1c6a48518859e09e88fa8",
"text": "Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural network, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4% and 17.4% respectively.",
"title": ""
}
] | [
{
"docid": "8093101949a96d27082712ce086bf11f",
"text": "Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments. Correct individual decisions hence require global information about the sentence context and mistakes cause error propagation. This paper proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition. This allows the parser to leverage lexical information more directly in transition decisions. Hence, arc-swift can achieve significantly better performance with a very small beam size. Our parsers reduce error by 3.7–7.6% relative to those using existing transition systems on the Penn Treebank dependency parsing task and English Universal Dependencies.",
"title": ""
},
{
"docid": "d43e38bca0289c612c429f497171713c",
"text": "Due to the unprecedented growth of unedited videos, finding highlights relevant to a text query in a set of unedited videos has become increasingly important. We refer this task as semantic highlight retrieval and propose a query-dependent video representation for retrieving a variety of highlights. Our method consists of two parts: 1) “viralets”, a mid-level representation bridging between semantic [Fig. 1(a)] and visual [Fig. 1(c)] spaces and 2) a novel Semantic-MODulation (SMOD) procedure to make viralets query-dependent (referred to as SMOD viralets). Given SMOD viralets, we train a single highlight ranker to predict the highlightness of clips with respect to a variety of queries (two examples in Fig. 1), whereas existing approaches can be applied only in a few predefined domains. Other than semantic highlight retrieval, viralets can also be used to associate relevant terms to each video. We utilize this property and propose a simple term prediction method based on nearest neighbor search. To conduct experiments, we collect a viral video dataset1 including users’ comments, highlights, and/or original videos. Among a testing database with 1189 clips (13% highlights and 87% non-highlights), our highlight ranker achieves 41.2% recall at top-10 retrieved clips. It is significantly higher than the state-of-the-art domain-specific highlight ranker and its extension. Similarly, our method also outperforms all baseline methods on the publicly available video highlight dataset. Finally, our simple term prediction method utilizing viralets outperforms the state-of-the-art matrix factorization method (adapted from Kalayeh et al.). 1 Viral videos refer to popular online videos. We focus on user-generated viral videos, which typically contain short highlight marked by users.",
"title": ""
},
{
"docid": "29f17b7d7239a2845d513976e4981d6a",
"text": "Agriculture is the backbone of the Indian economy. As all know that demand of agricultural products are increasing day by day as the population is ever increasing, so there is a need to minimize labor, limit the use of water and increase the production of crops. So there is a need to switch from traditional agriculture to the modern agriculture. The introduction of internet of things into agriculture modernization will help solve these problems. This paper presents the IOT based agriculture production system which will monitor or analyze the crop environment like temperature humidity and moisture content in soil. This paper uses the integration of RFID technology and sensors. As both have different objective sensors are for sensing and RIFD technology is for identification This will effectively solve the problem of farmer, increase the yield and saves his time, power, money.",
"title": ""
},
{
"docid": "47fccbf00b2caaad529d660073b7e9a0",
"text": "The rapidly increasing popularity of community-based Question Answering (cQA) services, e.g. Yahoo! Answers, Baidu Zhidao, etc. have attracted great attention from both academia and industry. Besides the basic problems, like question searching and answer finding, it should be noted that the low participation rate of users in cQA service is the crucial problem which limits its development potential. In this paper, we focus on addressing this problem by recommending answer providers, in which a question is given as a query and a ranked list of users is returned according to the likelihood of answering the question. Based on the intuitive idea for recommendation, we try to introduce topic-level model to improve heuristic term-level methods, which are treated as the baselines. The proposed approach consists of two steps: (1) discovering latent topics in the content of questions and answers as well as latent interests of users to build user profiles; (2) recommending question answerers for new arrival questions based on latent topics and term-level model. Specifically, we develop a general generative model for questions and answers in cQA, which is then altered to obtain a novel computationally tractable Bayesian network model. Experiments are carried out on a real-world data crawled from Yahoo! Answers during Jun 12 2007 to Aug 04 2007, which consists of 118510 questions, 772962 answers and 150324 users. The experimental results reveal significant improvements over the baseline methods and validate the positive influence of topic-level information.",
"title": ""
},
{
"docid": "30eb03eca06dcc006a28b5e00431d9ed",
"text": "We present for the first time a μW-power convolutional neural network for seizure detection running on a low-power microcontroller. On a dataset of 22 patients a median sensitivity of 100% is achieved. With a false positive rate of 20.7 fp/h and a short detection delay of 3.4 s it is suitable for the application in an implantable closed-loop device.",
"title": ""
},
{
"docid": "88a052d1e6e5d6776711b58e0711869d",
"text": "We are in the midst of a revolution in military affairs (RMA) unlike any seen since the Napoleonic Age, when France transformed warfare with the concept of levŽe en masse. Chief of Naval Operations Admiral Jay Johnson has called it \"a fundamental shift from what we call platform-centric warfare to something we call network-centric warfare,\" and it will prove to be the most important RMA in the past 200 years.",
"title": ""
},
{
"docid": "2f20bca0134eb1bd9d65c4791f94ddcc",
"text": "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"title": ""
},
{
"docid": "b2b4e5162b3d7d99a482f9b82820d59e",
"text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.",
"title": ""
},
{
"docid": "d5c4e44514186fa1d82545a107e87c94",
"text": "Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their look, activities, and behavior providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. This paper presents a new algorithm for detecting moving objects from a static background scene based on frame difference. Firstly, the first frame is captured through the static camera and after that sequence of frames is captured at regular intervals. Secondly, the absolute difference is calculated between the consecutive frames and the difference image is stored in the system. Thirdly, the difference image is converted into gray image and then translated into binary image. Finally, morphological filtering is done to remove noise.",
"title": ""
},
{
"docid": "d1e9eb1357381310c4540a6dcbe8973a",
"text": "We introduce a method for learning Bayesian networks that handles the discretization of continuous variables as an integral part of the learning process. The main ingredient in this method is a new metric based on the Minimal Description Length principle for choosing the threshold values for the discretization while learning the Bayesian network structure. This score balances the complexity of the learned discretization and the learned network structure against how well they model the training data. This ensures that the discretization of each variable introduces just enough intervals to capture its interaction with adjacent variables in the network. We formally derive the new metric, study its main properties, and propose an iterative algorithm for learning a discretization policy. Finally, we illustrate its behavior in applications to supervised learning.",
"title": ""
},
{
"docid": "896500db22d621abf1a0fd88cedc8483",
"text": "The motion analysis of human skeletons is crucial for human action recognition, which is one of the most active topics in computer vision. In this paper, we propose a fully end-to-end action-attending graphic neural network (A2GNN) for skeleton-based action recognition, in which each irregular skeleton is structured as an undirected attribute graph. To extract high-level semantic representation from skeletons, we perform the local spectral graph filtering on the constructed attribute graphs like the standard image convolution operation. Considering not all joints are informative for action analysis, we design an action-attending layer to detect those salient action units by adaptively weighting skeletal joints. Herein, the filtering responses are parameterized into a weighting function irrelevant to the order of input nodes. To further encode continuous motion variations, the deep features learnt from skeletal graphs are gathered along consecutive temporal slices and then fed into a recurrent gated network. Finally, the spectral graph filtering, action-attending, and recurrent temporal encoding are integrated together to jointly train for the sake of robust action recognition as well as the intelligibility of human actions. To evaluate our A2GNN, we conduct extensive experiments on four benchmark skeleton-based action datasets, including the large-scale challenging NTU RGB+D dataset. The experimental results demonstrate that our network achieves the state-of-the-art performances.",
"title": ""
},
{
"docid": "7c10a44e5fa0f9e01951e89336c4b4d6",
"text": "Previous studies have examined the online research behaviors of graduate students in terms of how they seek and retrieve research-related information on the Web across diverse disciplines. However, few have focused on graduate students’ searching activities, and particularly for their research tasks. Drawing on Kuiper, Volman, and Terwel’s (2008) three aspects of web literacy skills (searching, reading, and evaluating), this qualitative study aims to better understand a group of graduate engineering students’ searching, reading, and evaluating processes for research purposes. Through in-depth interviews and the think-aloud protocol, we compared the strategies employed by 22 Taiwanese graduate engineering students. The results showed that the students’ online research behaviors included seeking and obtaining, reading and interpreting, and assessing and evaluating sources. The findings suggest that specialized training for preparing novice researchers to critically evaluate relevant information or scholarly work to fulfill their research purposes is needed. Implications for enhancing the information literacy of engineering students are discussed.",
"title": ""
},
{
"docid": "b59b5bfb0758a07a72c6bbd7f90212e0",
"text": "The ease with which digital images can be manipulated without severe degradation of quality makes it necessary to be able to verify the authenticity of digital images. One way to establish the image authenticity is by computing a hash sequence from an image. This hash sequence must be robust against non content-altering manipulations, but must be able to show if the content of the image has been tampered with. Furthermore, the hash has to have enough differentiating power such that the hash sequences from two different images are not similar. This paper presents an image hashing system based on local Histogram of Oriented Gradients. The system is shown to have good differentiating power, robust against non content-altering manipulations such as filtering and JPEG compression and is sensitive to content-altering attacks.",
"title": ""
},
{
"docid": "4b6da0b9c88f4d94abfbbcb08bb0fc43",
"text": "In this paper we show how word embeddings can be used to increase the effectiveness of a state-of-the art Locality Sensitive Hashing (LSH) based first story detection (FSD) system over a standard tweet corpus. Vocabulary mismatch, in which related tweets use different words, is a serious hindrance to the effectiveness of a modern FSD system. In this case, a tweet could be flagged as a first story even if a related tweet, which uses different but synonymous words, was already returned as a first story. In this work, we propose a novel approach to mitigate this problem of lexical variation, based on tweet expansion. In particular, we propose to expand tweets with semantically related paraphrases identified via automatically mined word embeddings over a background tweet corpus. Through experimentation on a large data stream comprised of 50 million tweets, we show that FSD effectiveness can be improved by 9.5% over a state-of-the-art FSD system.",
"title": ""
},
{
"docid": "14077e87744089bb731085590be99a75",
"text": "The Vehicle Routing Problem (VRP) is an important problem occurring in many logistics systems. The objective of VRP is to serve a set of customers at minimum cost, such that every node is visited by exactly one vehicle only once. In this paper, we consider the Dynamic Vehicle Routing Problem (DVRP) which new customer demands are received along the day. Hence, they must be serviced at their locations by a set of vehicles in real time minimizing the total travel distance. The main goal of this research is to find a solution of DVRP using genetic algorithm. However we used some heuristics in addition during generation of the initial population and crossover for tuning the system to obtain better result. The computational experiments were applied to 22 benchmarks instances with up to 385 customers and the effectiveness of the proposed approach is validated by comparing the computational results with those previously presented in the literature.",
"title": ""
},
{
"docid": "e4a3dfe53a66d0affd73234761e7e0e2",
"text": "BACKGROUND\nWhether cannabis can cause psychotic or affective symptoms that persist beyond transient intoxication is unclear. We systematically reviewed the evidence pertaining to cannabis use and occurrence of psychotic or affective mental health outcomes.\n\n\nMETHODS\nWe searched Medline, Embase, CINAHL, PsycINFO, ISI Web of Knowledge, ISI Proceedings, ZETOC, BIOSIS, LILACS, and MEDCARIB from their inception to September, 2006, searched reference lists of studies selected for inclusion, and contacted experts. Studies were included if longitudinal and population based. 35 studies from 4804 references were included. Data extraction and quality assessment were done independently and in duplicate.\n\n\nFINDINGS\nThere was an increased risk of any psychotic outcome in individuals who had ever used cannabis (pooled adjusted odds ratio=1.41, 95% CI 1.20-1.65). Findings were consistent with a dose-response effect, with greater risk in people who used cannabis most frequently (2.09, 1.54-2.84). Results of analyses restricted to studies of more clinically relevant psychotic disorders were similar. Depression, suicidal thoughts, and anxiety outcomes were examined separately. Findings for these outcomes were less consistent, and fewer attempts were made to address non-causal explanations, than for psychosis. A substantial confounding effect was present for both psychotic and affective outcomes.\n\n\nINTERPRETATION\nThe evidence is consistent with the view that cannabis increases risk of psychotic outcomes independently of confounding and transient intoxication effects, although evidence for affective outcomes is less strong. The uncertainty about whether cannabis causes psychosis is unlikely to be resolved by further longitudinal studies such as those reviewed here. However, we conclude that there is now sufficient evidence to warn young people that using cannabis could increase their risk of developing a psychotic illness later in life.",
"title": ""
},
{
"docid": "d03dbec2a7361aaa41097703654e6a5d",
"text": "1Department of Computer Science Electrical Engineering, University of Missouri-Kansas City, Kansas City, MO 64110, USA 2Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 62102, Taiwan 3Key Laboratory of Network Security and Cryptology, Fujian Normal University, Fujian 350007, P. R. China 4Department of Information Engineering and Computer Science, Feng Chia University, No. 100, Wenhwa Rd., Xitun Dist., Taichung 40724, Taiwan ∗Corresponding author: ccc@cs.ccu.edu.tw",
"title": ""
},
{
"docid": "a2a63b4e7864a6a7aa057d2addf50065",
"text": "Research in automatic analysis of sign language has largely focused on recognizing the lexical (or citation) form of sign gestures as they appear in continuous signing, and developing algorithms that scale well to large vocabularies. However, successful recognition of lexical signs is not sufficient for a full understanding of sign language communication. Nonmanual signals and grammatical processes which result in systematic variations in sign appearance are integral aspects of this communication but have received comparatively little attention in the literature. In this survey, we examine data acquisition, feature extraction and classification methods employed for the analysis of sign language gestures. These are discussed with respect to issues such as modeling transitions between signs in continuous signing, modeling inflectional processes, signer independence, and adaptation. We further examine works that attempt to analyze nonmanual signals and discuss issues related to integrating these with (hand) sign gestures. We also discuss the overall progress toward a true test of sign recognition systems--dealing with natural signing by native signers. We suggest some future directions for this research and also point to contributions it can make to other fields of research. Web-based supplemental materials (appendicies) which contain several illustrative examples and videos of signing can be found at www.computer.org/publications/dlib.",
"title": ""
},
{
"docid": "48b88774957a6d30ae9d0a97b9643647",
"text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features",
"title": ""
}
] | scidocsrr |
4416ba7d54f47b41f654c4358ef2d632 | Non-rigid Object Tracking via Deformable Patches Using Shape-Preserved KCF and Level Sets | [
{
"docid": "198311a68ad3b9ee8020b91d0b029a3c",
"text": "Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods.",
"title": ""
},
{
"docid": "85a076e58f4d117a37dfe6b3d68f5933",
"text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.",
"title": ""
}
] | [
{
"docid": "eb766409144157d20fd0c709b3d92035",
"text": "Primary human lymphedema (Milroy's disease), characterized by a chronic and disfiguring swelling of the extremities, is associated with heterozygous inactivating missense mutations of the gene encoding vascular endothelial growth factor C/D receptor (VEGFR-3). Here, we describe a mouse model and a possible treatment for primary lymphedema. Like the human patients, the lymphedema (Chy) mice have an inactivating Vegfr3 mutation in their germ line, and swelling of the limbs because of hypoplastic cutaneous, but not visceral, lymphatic vessels. Neuropilin (NRP)-2 bound VEGF-C and was expressed in the visceral, but not in the cutaneous, lymphatic endothelia, suggesting that it may participate in the pathogenesis of lymphedema. By using virus-mediated VEGF-C gene therapy, we were able to generate functional lymphatic vessels in the lymphedema mice. Our results suggest that growth factor gene therapy is applicable to human lymphedema and provide a paradigm for other diseases associated with mutant receptors.",
"title": ""
},
{
"docid": "76ce7807d5afcb5fb5e1d4bf65d01489",
"text": "Tile antiradical activities of various antioxidants were determined using the free radical, 2.2-Diphenyl-l-pict3,1hydrazyl (DPPI-I°). In its radical form, DPPI-I ° has an absorption band at 515 nm which disappears upon reduction by an antiradical compound. Twenty compounds were reacted with the DPPI-I ° and shown to follow one of three possible reaction kinetic types. Ascorbie acid, isoascorbic acid and isoeugenol reacted quickly with the DPPI-I ° reaching a steady state immediately. Rosmarinic acid and 6-tocopherol reacted a little slower and reached a steady state within 30 rain. The remaining compounds reacted more progressively with the DPPH ° reaching a steady state from I to 6 h. Caffeic acid, gentisic acid and gallic acid showed the highest antiradical activities with a stoichiometo, of 4 to 6 reduced DPPH ° molecules pet\" molecule of antioxidant. Vanillin, phenol, y-resort3'lic acid and vanillic acid were found to be poor antiradical compounds. The stoichiometry, for the other 13 phenolic compounds varied from one to three reduced DPPH ° molecules pet\" molecule of antioxidant. Possible mechanisms are proposed to explain the e.werimental results.",
"title": ""
},
{
"docid": "834af0b828702aae0482a2e31e3f8a40",
"text": "We routinely hear vendors claim that their systems are “secure.” However, without knowing what assumptions are made by the vendor, it is hard to justify such a claim. Prior to claiming the security of a system, it is important to identify the threats to the system in question. Enumerating the threats to a system helps system architects develop realistic and meaningful security requirements. In this paper, we investigate how threat modeling can be used as foundations for the specification of security requirements. Although numerous works have been published on threat modeling, there is a lack of integrated, systematic approach toward threat modeling for complex systems. We examine the differences between modeling software products and complex systems, and outline our approach for identifying threats of networked systems. We also present three case studies of threat modeling: Software-Defined Radio, a network traffic monitoring tool (VisFlowConnect), and a cluster security monitoring tool (NVisionCC).",
"title": ""
},
{
"docid": "56ec3abe17259cae868e17dc2163fc0e",
"text": "This paper reports a case study about lessons learned and usability issues encountered in a usability inspection of a digital library system called the Networked Computer Science Technical Reference Library (NCSTRL). Using a co-discovery technique with a team of three expert usability inspectors (the authors), we performed a usability inspection driven by a broad set of anticipated user tasks. We found many good design features in NCSTRL, but the primary result of a usability inspection is a list of usability problems as candidates for fixing. The resulting problems are organized by usability problem type and by system functionality, with emphasis on the details of problems specific to digital library functions. The resulting usability problem list was used to illustrate a cost/importance analysis technique that trades off importance to fix against cost to fix. The problems are sorted by the ratio of importance to cost, producing a priority ranking for resolution.",
"title": ""
},
{
"docid": "f649db3b6fa6a929ac0434b12ddeea54",
"text": "The rapid growth of e-Commerce amongst private sectors and Internet usage amongst citizens has vastly stimulated e-Government initiatives from many countries. The Thailand e-Government initiative is based on the government's long-term strategic policy that aims to reform and overhaul the Thai bureaucracy. This study attempted to identify the e-Excise success factors by employing the IS success model. The study focused on finding the factors that may contribute to the success of the e-Excise initiative. The Delphi Technique was used to investigate the determinant factors for the success of the e-Excise initiative. Three-rounds of data collection were conducted with 77 active users from various industries. The results suggest that by increasing Trust in the e-Government website, Perceptions of Information Quality, Perceptions of System Quality, and Perceptions of Service Quality will influence System Usage and User Satisfaction, and will ultimately have consequences for the Perceived Net Benefits.",
"title": ""
},
{
"docid": "51f4b288d0c902e083a0eede6f342ba2",
"text": "Transactional memory (TM) is a promising synchronization mechanism for the next generation of multicore processors. Best-effort Hardware Transactional Memory (HTM) designs, such as Sun's prototype Rock processor and AMD's proposed Advanced Synchronization Facility (ASF), can efficiently execute many transactions, but abort in some cases due to various limitations. Hybrid TM systems can use a compatible software TM (STM) in such cases.\n We introduce a family of hybrid TMs built using the recent NOrec STM algorithm that, unlike existing hybrid approaches, provide both low overhead on hardware transactions and concurrent execution of hardware and software transactions. We evaluate implementations for Rock and ASF, exploring how the differing HTM designs affect optimization choices. Our investigation yields valuable input for designers of future best-effort HTMs.",
"title": ""
},
{
"docid": "365b95202095942c4b2b43a5e6f6e04e",
"text": "Abstract. In this paper we use the contraction mapping theorem to obtain asymptotic stability results of the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. Some conditions which allow the coefficient functions to change sign and do not ask the boundedness of delays are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improve and extend the results in the literature. Two examples are also given to illustrate this work.",
"title": ""
},
{
"docid": "8d9d2bc18bede24fede2e3d14b0e7f87",
"text": "Artificial Neural Network (ANN) forms a useful tool in pattern recognition tasks. Collection of five, eight or more cards in a cards game are normally called poker hands. There are various poker variations, each with different poker hands ranking. In the present paper, an attempt is made to solve poker hand classification problem using different learning paradigms and architectures of artificial neural network: multi-layer feed-forward Backpropagation (supervised) and self-organizing map (un-supervised). Poker data set is touted to be a difficult dataset for classification algorithms. Experimental results are presented to demonstrate the performance of the proposed system. The paper also aims to suggest about training algorithms and training parameters that must be chosen in order to solve poker hand classification problem using neural network model. As neural networks are the most convenient tools for handling complicated data sets with real values, one of the most important objectives of the paper is to explain how a neural network can also be used successfully for classification kind of problems involving categorical attributes. The proposed model succeeded in classification of poker hands with 94% classification accuracy.",
"title": ""
},
{
"docid": "b163fb3faa31f6db35599d32d7946523",
"text": "Humans learn how to behave directly through environmental experience and indirectly through rules and instructions. Behavior analytic research has shown that instructions can control behavior, even when such behavior leads to sub-optimal outcomes (Hayes, S. (Ed.). 1989. Rule-governed behavior: cognition, contingencies, and instructional control. Plenum Press.). Here we examine the control of behavior through instructions in a reinforcement learning task known to depend on striatal dopaminergic function. Participants selected between probabilistically reinforced stimuli, and were (incorrectly) told that a specific stimulus had the highest (or lowest) reinforcement probability. Despite experience to the contrary, instructions drove choice behavior. We present neural network simulations that capture the interactions between instruction-driven and reinforcement-driven behavior via two potential neural circuits: one in which the striatum is inaccurately trained by instruction representations coming from prefrontal cortex/hippocampus (PFC/HC), and another in which the striatum learns the environmentally based reinforcement contingencies, but is \"overridden\" at decision output. Both models capture the core behavioral phenomena but, because they differ fundamentally on what is learned, make distinct predictions for subsequent behavioral and neuroimaging experiments. Finally, we attempt to distinguish between the proposed computational mechanisms governing instructed behavior by fitting a series of abstract \"Q-learning\" and Bayesian models to subject data. The best-fitting model supports one of the neural models, suggesting the existence of a \"confirmation bias\" in which the PFC/HC system trains the reinforcement system by amplifying outcomes that are consistent with instructions while diminishing inconsistent outcomes.",
"title": ""
},
{
"docid": "84d8ff8724df86ce100ddfbb150e7446",
"text": "Adaptive Gaussian mixtures have been used for modeling nonstationary temporal distributions of pixels in video surveillance applications. However, a common problem for this approach is balancing between model convergence speed and stability. This paper proposes an effective scheme to improve the convergence rate without compromising model stability. This is achieved by replacing the global, static retention factor with an adaptive learning rate calculated for each Gaussian at every frame. Significant improvements are shown on both synthetic and real video data. Incorporating this algorithm into a statistical framework for background subtraction leads to an improved segmentation performance compared to a standard method.",
"title": ""
},
{
"docid": "d92e0e7ff8d0dabcac5b773d361a26a3",
"text": "Several studies on brain Magnetic Resonance Images (MRI) show relations between neuroanatomical abnormalities of brain structures and neurological disorders, such as Attention Defficit Hyperactivity Disorder (ADHD) and Alzheimer. These abnormalities seem to be correlated with the size and shape of these structures, and there is an active field of research trying to find accurate methods for automatic MRI segmentation. In this project, we study the automatic segmentation of structures from the Basal Ganglia and we propose a new methodology based on Stacked Sparse Autoencoders (SSAE). SSAE is a strategy that belongs to the family of Deep Machine Learning and consists on a supervised learning method based on an unsupervisely pretrained Feed-forward Neural Network. Moreover, we present two approaches based on 2D and 3D features of the images. We compare the results obtained on the different regions of interest with those achieved by other machine learning techniques such as Neural Networks and Support Vector Machines. We observed that in most cases SSAE improves those other methods. We demonstrate that the 3D features do not report better results than the 2D ones as could be thought. Furthermore, we show that SSAE provides state-of-the-art Dice Coefficient results (left, right): Caudate (90.63±1.4, 90.31±1.7), Putamen (91.03±1.4, 90.82±1.4), Pallidus (85.11±1.8, 83.47±2.2), Accumbens (74.26±4.4, 74.46±4.6).",
"title": ""
},
{
"docid": "7d7e4ddaa9c582c28e9186036fc0a375",
"text": "It has become common to distribute software in forms that are isomorphic to the original source code. An important example is Java bytecode. Since such codes are easy to decompile, they increase the risk of malicious reverse engineering attacks.In this paper we describe the design of a Java code obfuscator, a tool which - through the application of code transformations - converts a Java program into an equivalent one that is more difficult to reverse engineer.We describe a number of transformations which obfuscate control-flow. Transformations are evaluated with respect to potency (To what degree is a human reader confused?), resilience (How well are automatic deobfuscation attacks resisted?), cost (How much time/space overhead is added?), and stealth (How well does obfuscated code blend in with the original code?).The resilience of many control-altering transformations rely on the resilience of opaque predicates. These are boolean valued expressions whose values are known to the obfuscator but difficult to determine for an automatic deobfuscator. We show how to construct resilient, cheap, and stealthy opaque predicates based on the intractability of certain static analysis problems such as alias analysis.",
"title": ""
},
{
"docid": "43abb5eadd40c7e5e5d13c7ff33da9d7",
"text": "Roll-to-Roll (R2R) production of thin film based display components (e.g., active matrix TFT backplanes and touch screens) combine the advantages of the use > of inexpensive, lightweight, and flexible substrates with high throughput production. Significant cost reduction opportunities can also be found in terms of processing tool capital cost, utilized substrate area, and process gas flow when compared with batch processing systems. Applied Materials has developed a variety of different web handling and coating technologies/platforms to enable high volume R2R manufacture of thin film silicon solar cells, TFT active matrix backplanes, touch screen devices, and ultra-high barriers for organic electronics. The work presented in this chapter therefore describes the latest advances in R2R PVD processing and principal challenges inherent in moving from lab and pilot scale manufacturing to high volume manufacturing of flexible display devices using CVD for the deposition of active semiconductors layers, gate insulators, and high performance barrier/passivation layers. This chapter also includes brief description of the process and cost advantage of the use of rotatable PVD source technologies (primarily for use in flexible touch panel manufacture) and a summary of the current performance levels obtained for R2R processed amorphous silicon and IGZO TFT backplanes. Results will also be presented for barrier film for final device/frontplane encapsulation for display applications.",
"title": ""
},
{
"docid": "e2fb4ed617cffabba2f28b95b80a30b3",
"text": "The importance of information security education, information security training, and information security awareness in organisations cannot be overemphasised. This paper presents working definitions for information security education, information security training and information security awareness. An investigation to determine if any differences exist between information security education, information security training and information security awareness was conducted. This was done to help institutions understand when they need to train or educate employees and when to introduce information security awareness programmes. A conceptual analysis based on the existing literature was used for proposing working definitions, which can be used as a reference point for future information security researchers. Three important attributes (namely focus, purpose and method) were identified as the distinguishing characteristics of information security education, information security training and information security awareness. It was found that these information security concepts are different in terms of their focus, purpose and methods of delivery.",
"title": ""
},
{
"docid": "24e0fb7247644ba6324de9c86fdfeb12",
"text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.",
"title": ""
},
{
"docid": "cf6138d5af2946363188a3696cc2b7c0",
"text": "The Rational Unified Process® (RUP®) is a software engineering process framework. It captures many of the best practices in modern software development in a form that is suitable for a wide range of projects and organizations. It embeds object-oriented techniques and uses the UML as the principal notation for the several models that are built during the development. The RUP is also an open process framework that allows software organizations to tailor the process to their specific need, and to capture their own specific process know-how in the form of process components. Many process components are now developed by various organizations to cover different domains, technologies, tools, or type of development, and these components can be assembled to rapidly compose a suitable process. This tutorial will introduce the basic concepts and principles, which lie under the RUP framework, and show concrete examples of its usage.",
"title": ""
},
{
"docid": "c2a3344c607cf06c24ed8d2664243284",
"text": "It is common for cloud users to require clusters of inter-connected virtual machines (VMs) in a geo-distributed IaaS cloud, to run their services. Compared to isolated VMs, key challenges on dynamic virtual cluster (VC) provisioning (computation + communication resources) lie in two folds: (1) optimal placement of VCs and inter-VM traffic routing involve NP-hard problems, which are non-trivial to solve offline, not to mention if an online efficient algorithm is sought; (2) an efficient pricing mechanism is missing, which charges a market-driven price for each VC as a whole upon request, while maximizing system efficiency or provider revenue over the entire span. This paper proposes efficient online auction mechanisms to address the above challenges. We first design SWMOA, a novel online algorithm for dynamic VC provisioning and pricing, achieving truthfulness, individual rationality, computation efficiency, and <inline-formula><tex-math notation=\"LaTeX\">$(1+2\\log \\mu)$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq1-2601905.gif\"/></alternatives></inline-formula>-competitiveness in social welfare, where <inline-formula><tex-math notation=\"LaTeX\">$\\mu$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq2-2601905.gif\"/></alternatives></inline-formula> is related to the problem size. Next, applying a randomized reduction technique, we convert the social welfare maximizing auction into a revenue maximizing online auction, PRMOA, achieving <inline-formula><tex-math notation=\"LaTeX\">$O(\\log \\mu)$ </tex-math><alternatives><inline-graphic xlink:href=\"wu-ieq3-2601905.gif\"/></alternatives></inline-formula> -competitiveness in provider revenue, as well as truthfulness, individual rationality and computation efficiency. We investigate auction design in different cases of resource cost functions in the system. We validate the efficacy of the mechanisms through solid theoretical analysis and trace-driven simulations.",
"title": ""
},
{
"docid": "f3e382102c57e9d8f5349e374d1e6907",
"text": "In SCARA robots, which are often used in industrial applications, all joint axes are parallel, covering three degrees of freedom in translation and one degree of freedom in rotation. Therefore, conventional approaches for the handeye calibration of articulated robots cannot be used for SCARA robots. In this paper, we present a new linear method that is based on dual quaternions and extends the work of [1] for SCARA robots. To improve the accuracy, a subsequent nonlinear optimization is proposed. We address several practical implementation issues and show the effectiveness of the method by evaluating it on synthetic and real data.",
"title": ""
},
{
"docid": "bbd378407abb1c2a9a5016afee40c385",
"text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.",
"title": ""
},
{
"docid": "1e7721225d84896a72f2ea790570ecbd",
"text": "We have developed a Blumlein line pulse generator which utilizes the superposition of electrical pulses launched from two individually switched pulse forming lines. By using a fast power MOSFET as a switch on each end of the Blumlein line, we were able to generate pulses with amplitudes of 1 kV across a 100-Omega load. Pulse duration and polarity can be controlled by the temporal delay in the triggering of the two switches. In addition, the use of identical switches allows us to overcome pulse distortions arising from the use of non-ideal switches in the traditional Blumlein configuration. With this pulse generator, pulses with durations between 8 and 300 ns were applied to Jurkat cells (a leukemia cell line) to investigate the pulse dependent increase in calcium levels. The development of the calcium levels in individual cells was studied by spinning-disc confocal fluorescent microscopy with the calcium indicator, fluo-4. With this fast imaging system, fluorescence changes, representing calcium mobilization, could be resolved with an exposure of 5 ms every 18 ms. For a 60-ns pulse duration, each rise in intracellular calcium was greater as the electric field strength was increased from 25 kV/cm to 100 kV/cm. Only for the highest electric field strength is the response dependent on the presence of extracellular calcium. The results complement ion-exchange mechanisms previously observed during the charging of cellular membranes, which were suggested by observations of membrane potential changes during exposure.",
"title": ""
}
] | scidocsrr |
90d757d40e80bc376b4bcaef82a8a6e3 | Neural Discourse Modeling of Conversations | [
{
"docid": "64330f538b3d8914cbfe37565ab0d648",
"text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.",
"title": ""
}
] | [
{
"docid": "60f31d60213abe65faec3eb69edb1eea",
"text": "In this paper, a novel multi-layer four-way out-of-phase power divider based on substrate integrated waveguide (SIW) is proposed. The four-way power division is realized by 3-D mode coupling; vertical partitioning of a SIW followed by lateral coupling to two half-mode SIW. The measurement results show the excellent insertion loss (S<inf>21</inf>, S<inf>31</inf>, S<inf>41</inf>, S<inf>51</inf>: −7.0 ± 0.5 dB) and input return loss (S<inf>11</inf>: −10 dB) in X-band (7.63 GHz ∼ 11.12 GHz). We expect that the proposed power divider play an important role for the integration of compact multi-way SIW circuits.",
"title": ""
},
{
"docid": "f79f807b8b3f6516a14eaea37f9de82c",
"text": "Negative consumer opinion poses a potential barrier to the application of nutrigenomic intervention. The present study has aimed to determine attitudes toward genetic testing and personalised nutrition among the European public. An omnibus opinion survey of a representative sample aged 14-55+ years (n 5967) took place in France, Italy, Great Britain, Portugal, Poland and Germany during June 2005 as part of the Lipgene project. A majority of respondents (66 %) reported that they would be willing to undergo genetic testing and 27 % to follow a personalised diet. Individuals who indicated a willingness to have a genetic test for the personalising of their diets were more likely to report a history of high blood cholesterol levels, central obesity and/or high levels of stress than those who would have a test only for general interest. Those who indicated that they would not have a genetic test were more likely to be male and less likely to report having central obesity. Individuals with a history of high blood cholesterol were less likely than those who did not to worry if intervention foods contained GM ingredients. Individuals who were aware that they had health problems associated with the metabolic syndrome appeared particularly favourable toward nutrigenomic intervention. These findings are encouraging for the future application of personalised nutrition provided that policies are put in place to address public concern about how genetic information is used and held.",
"title": ""
},
{
"docid": "41098050e76786afbb892d4cd1ffaad2",
"text": "Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.",
"title": ""
},
{
"docid": "c9750e95b3bd422f0f5e73cf6c465b35",
"text": "Lingual nerve damage complicating oral surgery would sometimes require electrographic exploration. Nevertheless, direct recording of conduction in lingual nerve requires its puncture at the foramen ovale. This method is too dangerous to be practiced routinely in these diagnostic indications. The aim of our study was to assess spatial relationships between lingual nerve and mandibular ramus in the infratemporal fossa using an original technique. Therefore, ten lingual nerves were dissected on five fresh cadavers. All the nerves were catheterized with a 3/0 wire. After meticulous repositioning of the nerve and medial pterygoid muscle reinsertion, CT-scan examinations were performed with planar acquisitions and three-dimensional reconstructions. Localization of lingual nerve in the infratemporal fossa was assessed successively at the level of the sigmoid notch of the mandible, lingula and third molar. At the level of the lingula, lingual nerve was far from the maxillary vessels; mean distance between the nerve and the anterior border of the ramus was 19.6 mm. The posteriorly opened angle between the medial side of the ramus and the line joining the lingual nerve and the anterior border of the ramus measured 17°. According to these findings, we suggest that the lingual nerve might be reached through the intra-oral puncture at the intermaxillary commissure; therefore, we modify the inferior alveolar nerve block technique to propose a safe and reproducible protocol likely to be performed routinely as electrographic exploration of the lingual nerve. What is more, this original study protocol provided interesting educational materials and could be developed for the conception of realistic 3D virtual anatomy supports.",
"title": ""
},
{
"docid": "5ed98c54020d4e5c7b0fa8b66436f3e1",
"text": "An important issue in the design of a mobile computing system is how to manage the location information of mobile clients. In the existing commercial cellular mobile computing systems, a twotier architecture is adopted (Mouly and Pautet, 1992). However, the two-tier architecture is not scalable. In the literatures (Pitoura and Samaras, 2001; Pitoura and Fudos, 1998), a hierarchical database structure is proposed in which the location information of mobile clients within a cell is managed by the location database responsible for the cell. The location databases of different cells are organized into a tree-like structure to facilitate the search of mobile clients. Although this architecture can distribute the updates and the searching workload amongst the location databases in the system, location update overheads can be very expensive when the mobility of clients is high. In this paper, we study the issues on how to generate location updates under the distance-based method for systems using hierarchical location databases. A cost-based method is proposed for calculating the optimal distance threshold with the objective to minimize the total location management cost. Furthermore, under the existing hierarchical location database scheme, the tree structure of the location databases is static. It cannot adapt to the changes in mobility patterns of mobile clients. This will affect the total location management cost in the system. In the second part of the paper, we present a re-organization strategy to re-structure the hierarchical tree of location databases according to the mobility patterns of the clients with the objective to minimize the location management cost. Extensive simulation experiments have been performed to investigate the re-organization strategy when our location update generation method is applied.",
"title": ""
},
{
"docid": "5d43586ebd66c6fc09683558536b89e9",
"text": "In this paper, we present an overview of UHF RFID tag performance characterization. We review the link budget of RFID system, explain different tag performance characteristics, and describe various testing methods. We also review state-of-the art test systems present on the market today.",
"title": ""
},
{
"docid": "41b1a0c362c7bdb77b7dbcc20adcd532",
"text": "Augmented reality involves the use of models and their associated renderings to supplement information in a real scene. In order for this information to be relevant or meaningful, the models must be positioned and displayed in such a way that they align with their corresponding real objects. For practical reasons this alignment cannot be known a priori, and cannot be hard-wired into a system. Instead a simple, reliable alignment or calibration process is performed so that computer models can be accurately registered with their real-life counterparts. We describe the design and implementation of such a process and we show how it can be used to create convincing interactions between real and virtual objects.",
"title": ""
},
{
"docid": "b50d85f1993525c01370b9d90063e135",
"text": "This paper is aimed at analyzing the behavior of a packed bed latent heat thermal energy storage system. The packed bed is composed of spherical capsules filled with paraffin wax as PCM usable with a solar water heating system. The model developed in this study uses the fundamental equations similar to those of Schumann, except that the phase change phenomena of PCM inside the capsules are analyzed by using enthalpy method. The equations are numerically solved, and the results obtained are used for the thermal performance analysis of both charging and discharging processes. The effects of the inlet heat transfer fluid temperature (Stefan number), mass flow rate and phase change temperature range on the thermal performance of the capsules of various radii have been investigated. The results indicate that for the proper modeling of performance of the system the phase change temperature range of the PCM must be accurately known, and should be taken into account. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4eb37f87312ce521c30858f6a97edd59",
"text": "We propose an automatic framework for quality assessment of a photograph as well as analysis of its aesthetic attributes. In contrast to the previous methods that rely on manually designed features to account for photo aesthetics, our method automatically extracts such features using a pretrained deep convolutional neural network (DCNN). To make the DCNN-extracted features more suited to our target tasks of photo quality assessment and aesthetic attribute analysis, we propose a novel feature encoding scheme, which supports vector machines-driven sparse restricted Boltzmann machines, which enhances sparseness of features and discrimination between target classes. Experimental results show that our method outperforms the current state-of-the-art methods in automatic photo quality assessment, and gives aesthetic attribute ratings that can be used for photo editing. We demonstrate that our feature encoding scheme can also be applied to general object classification task to achieve performance gains.",
"title": ""
},
{
"docid": "f1325dd1350acf612dc1817db693a3d6",
"text": "Software for the measurement of genetic diversity (SMOGD) is a web-based application for the calculation of the recently proposed genetic diversity indices G'(ST) and D(est) . SMOGD includes bootstrapping functionality for estimating the variance, standard error and confidence intervals of estimated parameters, and SMOGD also generates genetic distance matrices from pairwise comparisons between populations. SMOGD accepts standard, multilocus Genepop and Arlequin formatted input files and produces HTML and tab-delimited output. This allows easy data submission, quick visualization, and rapid import of results into spreadsheet or database programs.",
"title": ""
},
{
"docid": "f054e4464f2ef68ad9127afe00108b9a",
"text": "RFID systems often use near-field magnetic coupling to implement communication channels. The advertised operational range of these channels is less than 10 cm and therefore several implemented systems assume that the communication channel is location limited and therefore relatively secure. Nevertheless, there have been repeated questions raised about the vulnerability of these near-field systems against eavesdropping and skimming attacks. In this paper we revisit the topic of RFID eavesdropping and skimming attacks, surveying previous work and explaining why the feasibility of practical attacks is still a relevant and novel research topic. We present a brief overview of the radio characteristics for popular HF RFID standards and present some practical results for eavesdropping experiments against tokens adhering to the ISO 14443 and ISO 15693 standards. We also discuss how an attacker could construct a low-cost eavesdropping device using easy to obtain parts and reference designs. Finally, we present results for skimming experiments against ISO 14443 tokens.",
"title": ""
},
{
"docid": "c300043b5546f8ca75070aa66d05a1d3",
"text": "In recent years high-speed electromagnetic repulsion mechanism (ERM), which is produced based on the eddy-current effect, has been widely applied to vacuum circuit breakers. One of the challenges for the design of ERM is to improve the speed of ERM by optimization design. In this paper, a novel co-simulation model is proposed. The related electromagnetic field, mechanical field and structural field of ERM have been analyzed through co-simulation. Results show that achieving separation in a few milliseconds is possible. Besides, considering the metal plate, as a driver of frequent operation, is possible to reach its mechanic limit, the stress and the strain of it are analyzed. In addition, according to the parametric analysis, the relationship between improving speed and optimizing parameters, e.g., the turns of the coil, the structural size, the storing form of energy and the initial gap are investigated.",
"title": ""
},
{
"docid": "cd25829b5e42a77485ceefd18b682410",
"text": "Members of the Fleischner Society compiled a glossary of terms for thoracic imaging that replaces previous glossaries published in 1984 and 1996 for thoracic radiography and computed tomography (CT), respectively. The need to update the previous versions came from the recognition that new words have emerged, others have become obsolete, and the meaning of some terms has changed. Brief descriptions of some diseases are included, and pictorial examples (chest radiographs and CT scans) are provided for the majority of terms.",
"title": ""
},
{
"docid": "b89259a915856b309a02e6e7aa6c957f",
"text": "The paper proposes a comprehensive information security maturity model (ISMM) that addresses both technical and socio/non-technical security aspects. The model is intended for securing e-government services (implementation and service delivery) in an emerging and increasing security risk environment. The paper utilizes extensive literature review and survey study approaches. A total of eight existing ISMMs were selected and critically analyzed. Models were then categorized into security awareness, evaluation and management orientations. Based on the model’s strengths – three models were selected to undergo further analyses and then synthesized. Each of the three selected models was either from the security awareness, evaluation or management orientations category. To affirm the findings – a survey study was conducted into six government organizations located in Tanzania. The study was structured to a large extent by the security controls adopted from the Security By Consensus (SBC) model. Finally, an ISMM with five critical maturity levels was proposed. The maturity levels were: undefined, defined, managed, controlled and optimized. The papers main contribution is the proposed model that addresses both technical and non-technical security services within the critical maturity levels. Additionally, the paper enhances awareness and understanding on the needs for security in e-government services to stakeholders.",
"title": ""
},
{
"docid": "409d104fa3e992ac72c65b004beaa963",
"text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.",
"title": ""
},
{
"docid": "78835f284b953e50c55c31d49695701f",
"text": "The Named-Data Networking (NDN) has emerged as a clean-slate Internet proposal on the wave of Information-Centric Networking. Although the NDN's data-plane seems to offer many advantages, e.g., native support for multicast communications and flow balance, it also makes the network infrastructure vulnerable to a specific DDoS attack, the Interest Flooding Attack (IFA). In IFAs, a botnet issuing unsatisfiable content requests can be set up effortlessly to exhaust routers' resources and cause a severe performance drop to legitimate users. So far several countermeasures have addressed this security threat, however, their efficacy was proved by means of simplistic assumptions on the attack model. Therefore, we propose a more complete attack model and design an advanced IFA. We show the efficiency of our novel attack scheme by extensively assessing some of the state-of-the-art countermeasures. Further, we release the software to perform this attack as open source tool to help design future more robust defense mechanisms.",
"title": ""
},
{
"docid": "ccbed79c4f8504594f37303beb6e9e0b",
"text": "Recent attacks on Bitcoin’s peer-to-peer (P2P) network demonstrated that its transaction-flooding protocols, which are used to ensure network consistency, may enable user deanonymization—the linkage of a user’s IP address with her pseudonym in the Bitcoin network. In 2015, the Bitcoin community responded to these attacks by changing the network’s flooding mechanism to a different protocol, known as diffusion. However, it is unclear if diffusion actually improves the system’s anonymity. In this paper, we model the Bitcoin networking stack and analyze its anonymity properties, both preand post-2015. The core problem is one of epidemic source inference over graphs, where the observational model and spreading mechanisms are informed by Bitcoin’s implementation; notably, these models have not been studied in the epidemic source detection literature before. We identify and analyze near-optimal source estimators. This analysis suggests that Bitcoin’s networking protocols (both preand post-2015) offer poor anonymity properties on networks with a regular-tree topology. We confirm this claim in simulation on a 2015 snapshot of the real Bitcoin P2P network topology.",
"title": ""
},
{
"docid": "39ed08e9a08b7d71a4c177afe8f0056a",
"text": "This paper proposes an anticipation model of potential customers’ purchasing behavior. This model is inferred from past purchasing behavior of loyal customers and the web server log files of loyal and potential customers by means of clustering analysis and association rules analysis. Clustering analysis collects key characteristics of loyal customers’ personal information; these are used to locate other potential customers. Association rules analysis extracts knowledge of loyal customers’ purchasing behavior, which is used to detect potential customers’ near-future interest in a star product. Despite using offline analysis to filter out potential customers based on loyal customers’ personal information and generate rules of loyal customers’ click streams based on loyal customers’ web log data, an online analysis which observes potential customers’ web logs and compares it with loyal customers’ click stream rules can more readily target potential customers who may be interested in the star products in the near future. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "61d02f77b270a24756b2ab2164ece5d0",
"text": "The transdifferentiation of epithelial cells into motile mesenchymal cells, a process known as epithelial–mesenchymal transition (EMT), is integral in development, wound healing and stem cell behaviour, and contributes pathologically to fibrosis and cancer progression. This switch in cell differentiation and behaviour is mediated by key transcription factors, including SNAIL, zinc-finger E-box-binding (ZEB) and basic helix–loop–helix transcription factors, the functions of which are finely regulated at the transcriptional, translational and post-translational levels. The reprogramming of gene expression during EMT, as well as non-transcriptional changes, are initiated and controlled by signalling pathways that respond to extracellular cues. Among these, transforming growth factor-β (TGFβ) family signalling has a predominant role; however, the convergence of signalling pathways is essential for EMT.",
"title": ""
},
{
"docid": "5b131fbca259f07bd1d84d4f61761903",
"text": "We aimed to identify a blood flow restriction (BFR) endurance exercise protocol that would both maximize cardiopulmonary and metabolic strain, and minimize the perception of effort. Twelve healthy males (23 ± 2 years, 75 ± 7 kg) performed five different exercise protocols in randomized order: HI, high-intensity exercise starting at 105% of the incremental peak power (P peak); I-BFR30, intermittent BFR at 30% P peak; C-BFR30, continuous BFR at 30% P peak; CON30, control exercise without BFR at 30% P peak; I-BFR0, intermittent BFR during unloaded exercise. Cardiopulmonary, gastrocnemius oxygenation (StO2), capillary lactate ([La]), and perceived exertion (RPE) were measured. V̇O2, ventilation (V̇ E), heart rate (HR), [La] and RPE were greater in HI than all other protocols. However, muscle StO2 was not different between HI (set1—57.8 ± 5.8; set2—58.1 ± 7.2%) and I-BRF30 (set1—59.4 ± 4.1; set2—60.5 ± 6.6%, p < 0.05). While physiologic responses were mostly similar between I-BFR30 and C-BFR30, [La] was greater in I-BFR30 (4.2 ± 1.1 vs. 2.6 ± 1.1 mmol L−1, p = 0.014) and RPE was less (5.6 ± 2.1 and 7.4 ± 2.6; p = 0.014). I-BFR30 showed similar reduced muscle StO2 compared with HI, and increased blood lactate compared to C-BFR30 exercise. Therefore, this study demonstrate that endurance cycling with intermittent BFR promotes muscle deoxygenation and metabolic strain, which may translate into increased endurance training adaptations while minimizing power output and RPE.",
"title": ""
}
] | scidocsrr |
bd404c364c2400990168678acf70ae6f | Change-Point Detection in Time-Series Data Based on Subspace Identification | [
{
"docid": "dca74df16e3a90726d51b3222483ac94",
"text": "We are concerned with the issue of detecting outliers and change points from time series. In the area of data mining, there have been increased interest in these issues since outlier detection is related to fraud detection, rare event discovery, etc., while change-point detection is related to event/trend change detection, activity monitoring, etc. Although, in most previous work, outlier detection and change point detection have not been related explicitly, this paper presents a unifying framework for dealing with both of them. In this framework, a probabilistic model of time series is incrementally learned using an online discounting learning algorithm, which can track a drifting data source adaptively by forgetting out-of-date statistics gradually. A score for any given data is calculated in terms of its deviation from the learned model, with a higher score indicating a high possibility of being an outlier. By taking an average of the scores over a window of a fixed length and sliding the window, we may obtain a new time series consisting of moving-averaged scores. Change point detection is then reduced to the issue of detecting outliers in that time series. We compare the performance of our framework with those of conventional methods to demonstrate its validity through simulation and experimental applications to incidents detection in network security.",
"title": ""
},
{
"docid": "0d41a6d4cf8c42ccf58bccd232a46543",
"text": "Novelty detection is the ident ification of new or unknown data or signal that a machine learning system is not aware of during training. In this paper we focus on neural network based approaches for novelty detection. Statistical approaches are covered in part-I paper.",
"title": ""
}
] | [
{
"docid": "3dcb93232121be1ff8a2d96ecb25bbdd",
"text": "We describe the approach that won the preliminary phase of the German traffic sign recognition benchmark with a better-than-human recognition rate of 98.98%.We obtain an even better recognition rate of 99.15% by further training the nets. Our fast, fully parameterizable GPU implementation of a Convolutional Neural Network does not require careful design of pre-wired feature extractors, which are rather learned in a supervised way. A CNN/MLP committee further boosts recognition performance.",
"title": ""
},
{
"docid": "c8ba829a6b0e158d1945bbb0ed68045b",
"text": "Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance. Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better. Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.",
"title": ""
},
{
"docid": "391cce3ac9ab87e31203637d89a8a082",
"text": "MicroRNAs (miRNAs) are small conserved non-coding RNA molecules that post-transcriptionally regulate gene expression by targeting the 3' untranslated region (UTR) of specific messenger RNAs (mRNAs) for degradation or translational repression. miRNA-mediated gene regulation is critical for normal cellular functions such as the cell cycle, differentiation, and apoptosis, and as much as one-third of human mRNAs may be miRNA targets. Emerging evidence has demonstrated that miRNAs play a vital role in the regulation of immunological functions and the prevention of autoimmunity. Here we review the many newly discovered roles of miRNA regulation in immune functions and in the development of autoimmunity and autoimmune disease. Specifically, we discuss the involvement of miRNA regulation in innate and adaptive immune responses, immune cell development, T regulatory cell stability and function, and differential miRNA expression in rheumatoid arthritis and systemic lupus erythematosus.",
"title": ""
},
{
"docid": "802d66fda1701252d1addbd6d23f6b4c",
"text": "Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8cd701723c72b16dfe7d321cb657ee31",
"text": "A coupled-inductor double-boost inverter (CIDBI) is proposed for microinverter photovoltaic (PV) module system, and the control strategy applied to it is analyzed. Also, the operation principle of the proposed inverter is discussed and the gain from dc to ac is deduced in detail. The main attribute of the CIDBI topology is the fact that it generates an ac output voltage larger than the dc input one, depending on the instantaneous duty cycle and turns ratio of the coupled inductor as well. This paper points out that the gain is proportional to the duty cycle approximately when the duty cycle is around 0.5 and the synchronized pulsewidth modulation can be applicable to this novel inverter. Finally, the proposed inverter servers as a grid inverter in the grid-connected PV system and the experimental results show that the CIDBI can implement the single-stage PV-grid-connected power generation competently and be of small volume and high efficiency by leaving out the transformer or the additional dc-dc converter.",
"title": ""
},
{
"docid": "f3811a34b2abd34d20e24e90ab9fe046",
"text": "Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result.",
"title": ""
},
{
"docid": "6485211d35cef2766675d78311864ff0",
"text": "In this paper, we investigate architectural and practical issues related to the setup of a broadband home network solution. Our experience led us to the consideration of a hybrid, wireless and wired, Mesh-Network to enable high data rate service delivery everywhere in the home. We demonstrate the effectiveness of our proposal using a real experimental testbed. This latter consists of a multi-hop mesh network composed of a home gateway and \"extenders\" supporting several types of physical connectivity including PLC, WiFi, and Ethernet. The solution also includes a layer 2 implementation of the OLSR protocol for path selection. We developed an extension of this protocol for QoS assurance and to enable the proper execution of existing services. We have also implemented a fast WiFi handover algorithm to ensure service continuity in case of user mobility among the extenders inside the home.",
"title": ""
},
{
"docid": "4ab3db4b0c338dbe8d5bb9e1f49f2a5c",
"text": "BACKGROUND\nSub-Saharan African (SSA) countries are currently experiencing one of the most rapid epidemiological transitions characterized by increasing urbanization and changing lifestyle factors. This has resulted in an increase in the incidence of non-communicable diseases, especially cardiovascular disease (CVD). This double burden of communicable and chronic non-communicable diseases has long-term public health impact as it undermines healthcare systems.\n\n\nPURPOSE\nThe purpose of this paper is to explore the socio-cultural context of CVD risk prevention and treatment in sub-Saharan Africa. We discuss risk factors specific to the SSA context, including poverty, urbanization, developing healthcare systems, traditional healing, lifestyle and socio-cultural factors.\n\n\nMETHODOLOGY\nWe conducted a search on African Journals On-Line, Medline, PubMed, and PsycINFO databases using combinations of the key country/geographic terms, disease and risk factor specific terms such as \"diabetes and Congo\" and \"hypertension and Nigeria\". Research articles on clinical trials were excluded from this overview. Contrarily, articles that reported prevalence and incidence data on CVD risk and/or articles that report on CVD risk-related beliefs and behaviors were included. Both qualitative and quantitative articles were included.\n\n\nRESULTS\nThe epidemic of CVD in SSA is driven by multiple factors working collectively. Lifestyle factors such as diet, exercise and smoking contribute to the increasing rates of CVD in SSA. Some lifestyle factors are considered gendered in that some are salient for women and others for men. For instance, obesity is a predominant risk factor for women compared to men, but smoking still remains mostly a risk factor for men. Additionally, structural and system level issues such as lack of infrastructure for healthcare, urbanization, poverty and lack of government programs also drive this epidemic and hampers proper prevention, surveillance and treatment efforts.\n\n\nCONCLUSION\nUsing an African-centered cultural framework, the PEN3 model, we explore future directions and efforts to address the epidemic of CVD risk in SSA.",
"title": ""
},
{
"docid": "10b0ab2570a7bba1ac1f575a0555eb4a",
"text": "It is well known that ozone concentration depends on air/oxygen input flow rate and power consumed by the ozone chamber. For every chamber, there exists a unique optimum flow rate that results in maximum ozone concentration. If the flow rate is increased (beyond) or decreased (below) from this optimum value, the ozone concentration drops. This paper proposes a technique whereby the concentration can be maintained even if the flow rate increases. The idea is to connect n number of ozone chambers in parallel, with each chamber designed to operate at its optimum point. Aside from delivering high ozone concentration at high flow rate, the proposed system requires only one power supply to drive all these (multiple) chambers simultaneously. In addition, due to its modularity, the system is very flexible, i.e., the number of chambers can be added or removed as demanded by the (output) ozone requirements. This paper outlines the chamber design using mica as dielectric and the determination of its parameters. To verify the concept, three chambers are connected in parallel and driven by a single transformer-less LCL resonant power supply. Moreover, a closed-loop feedback controller is implemented to ensure that the voltage gain remains at the designated value even if the number of chambers is changed or there is a variation in the components. It is shown that the flow rate can be increased linearly with the number of chambers while maintaining a constant ozone concentration.",
"title": ""
},
{
"docid": "e0382c9d739281b4bc78f4a69827ac37",
"text": "Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, both LBR and Super-Parent TAN have demonstrated remarkable error performance. However, both techniques obtain this outcome at a considerable computational cost. We present a new approach to weakening the attribute independence assumption by averaging all of a constrained class of classifiers. In extensive experiments this technique delivers comparable prediction accuracy to LBR and Super-Parent TAN with substantially improved computational efficiency at test time relative to the former and at training time relative to the latter. The new algorithm is shown to have low variance and is suited to incremental learning.",
"title": ""
},
{
"docid": "81d07b747f12f10066571c784e212991",
"text": "This work presents a bi-arm rolled monopole for ultrawide-band (UWB) applications. The roll monopole is constructed by wrapping a planar monopole. The impedance and radiation characteristics of the proposed roll monopole are experimentally compared with a rectangular planar monopole and strip monopole. Furthermore, the transfer responses of transmit-receive antenna systems comprising two identical monopoles are examined across the UWB band. The characteristics of the monopoles are investigated in both time and frequency domains for UWB single-band and multiple-band schemes. The study shows that the proposed bi-arm rolled monopole is capable of achieving broadband and omnidirectional radiation characteristics within 3.1-10.6 GHz for UWB wireless communications.",
"title": ""
},
{
"docid": "a2d851b76d6abcb3d9377c566b8bf6d9",
"text": "Many fabrication processes for polymeric objects include melt extrusion, in which the molten polymer is conveyed by a ram or a screw and the melt is then forced through a shaping die in continuous processing or into a mold for the manufacture of discrete molded parts. The properties of the fabricated solid object, including morphology developed during cooling and solidification, depend in part on the stresses and orientation induced during the melt shaping. Most polymers used for commercial processing are of sufficiently high molecular weight that the polymer chains are highly entangled in the melt, resulting in flow behavior that differs qualitatively from that of low-molecular-weight liquids. Obvious manifestations of the differences from classical Newtonian fluids are a strongly shear-dependent viscosity and finite stresses normal to the direction of shear in rectilinear flow, transients of the order of seconds for the buildup or relaxation of stresses following a change in shear rate, a finite phase angle between stress and shear rate in oscillatory shear, ratios of extensional to shear viscosities that are considerably greater than 3, and substantial extrudate swell on extrusion from a capillary or slit. These rheological characteristics of molten polymers have been reviewed in textbooks (e.g. Larson 1999, Macosko 1994); the recent research emphasis in rheology has been to establish meaningful constitutive models that incorporate chain behavior at a molecular level. All polymer melts and concentrated solutions exhibit instabilities during extrusion when the stresses to which they are subjected become sufficiently high. The first manifestation of extrusion instability is usually the appearance of distortions on the extrudate surface, sometimes accompanied by oscillating flow. Gross distortion of the extrudate usually follows. The sequence of extrudate distortions",
"title": ""
},
{
"docid": "5a0cf2582fab28fe07d215435632b610",
"text": "5G radio access networks are expected to provide very high capacity, ultra-reliability and low latency, seamless mobility, and ubiquitous end-user experience anywhere and anytime. Driven by such stringent service requirements coupled with the expected dense deployments and diverse use case scenarios, the architecture of 5G New Radio (NR) wireless access has further evolved from the traditionally cell-centric radio access to a more flexible beam-based user-centric radio access. This article provides an overview of the NR system multi-beam operation in terms of initial access procedures and mechanisms associated with synchronization, system information, and random access. We further discuss inter-cell mobility handling in NR and its reliance on new downlink-based measurements to compensate for a lack of always-on reference signals in NR. Furthermore, we describe some of the user-centric coordinated transmission mechanisms envisioned in NR in order to realize seamless intra/inter-cell handover between physical transmission and reception points and reduce the interference levels across the network.",
"title": ""
},
{
"docid": "5e840c5649492d5e93ddef2b94432d5f",
"text": "Commercially available laser lithography systems have been available for several years. One such system manufactured by Heidelberg Instruments can be used to produce masks for lithography or to directly pattern photoresist using either a 3 micron or 1 micron beam. These systems are designed to operate using computer aided design (CAD) mask files, but also have the capability of using images. In image mode, the power of the exposure is based on the intensity of each pixel in the image. This results in individual pixels that are the size of the beam, which establishes the smallest feature that can be patterned. When developed, this produces a range of heights within the photoresist which can then be transferred to the material beneath and used for a variety of applications. Previous research efforts have demonstrated that this process works well overall, but is limited in resolution and feature size due to the pixel approach of the exposure. However, if we modify the method used, much smaller features can be resolved, without the pixilation. This is achieved by utilizing multiple exposures of slightly different CAD type files in sequence. While the smallest beam width is approximately 1 micron, the beam positioning accuracy is much smaller, with 40 nm step changes in beam position based on the machine's servo gearing and optical design. When exposing in CAD mode, the beam travels along lines at constant power, so by automating multiple files in succession, and employing multiple smaller exposures of lower intensity, a similar result can be achieved. With this line exposure approach, pixilation can be greatly reduced. Due to the beam positioning accuracy of this mode, the effective resolution between lines is on the order of 40 nm steps, resulting in unexposed features of much smaller size and higher resolution.",
"title": ""
},
{
"docid": "01ee1036caeb4a64477aa19d0f8a6429",
"text": "In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among the possible uses it allows, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed for producing multi-lingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness and interest are key criteria that should be ensured in the proposed context. From the results obtained, it was observed that although the original tweets were considered as model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if a more personalized and catchy way of reporting news wants to be performed. In contrast, we showed that recent text summarization techniques may be more appropriate, reflecting a balance between indicativeness and interest, even if their content was different from the tweets delivered by the news providers.",
"title": ""
},
{
"docid": "7d860b431f44d42572fc0787bf452575",
"text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.",
"title": ""
},
{
"docid": "6392a6c384613f8ed9630c8676f0cad8",
"text": "References D. Bruckner, J. Rosen, and E. R. Sparks. deepviz: Visualizing convolutional neural networks for image classification. 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research,9(2579-2605):85, 2008. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hods Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer vision–ECCV 2014, pages 818–833. Springer, 2014. Network visualization of ReVACNN",
"title": ""
},
{
"docid": "9e638e09b77463e8c232c7960d49a544",
"text": "Force feedback coupled with visual display allows people to interact intuitively with complex virtual environments. For this synergy of haptics and graphics to flourish, however, haptic systems must be capable of modeling environments with the same richness, complexity and interactivity that can be found in existing graphic systems. To help meet this challenge, we have developed a haptic rendering system that allows f r the efficient tactile display of graphical information. The system uses a common high-level framework to model contact constraints, surface shading, friction and tex ture. The multilevel control system also helps ensure that the haptic device will remain stable even as the limits of the renderer’s capabilities are reached. CR",
"title": ""
}
] | scidocsrr |
ee420aa45778c29e64e40b75baa81a88 | Building Extraction in Very High Resolution Remote Sensing Imagery Using Deep Learning and Guided Filters | [
{
"docid": "477b118d5b7edda21b83d260f9918890",
"text": "Semantic labeling (or pixel-level land-cover classification) in ultrahigh-resolution imagery (<10 cm) requires statistical models able to learn high-level concepts from spatial data, with large appearance variations. Convolutional neural networks (CNNs) achieve this goal by learning discriminatively a hierarchy of representations of increasing abstraction. In this paper, we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample them back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This results in many advantages, including: 1) the state-of-the-art numerical accuracy; 2) the improved geometric accuracy of predictions; and 3) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam subdecimeter resolution data sets, involving the semantic labeling of aerial images of 9- and 5-cm resolution, respectively. These data sets are composed by many large and fully annotated tiles, allowing an unbiased evaluation of models making use of spatial information. We do so by comparing two standard CNN architectures with the proposed one: standard patch classification, prediction of local label patches by employing only convolutions, and full patch labeling by employing deconvolutions. All the systems compare favorably or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, also showing a very appealing inference time.",
"title": ""
}
] | [
{
"docid": "07de7621bcba13f151b8616f8ef46bb4",
"text": "There is growing evidence that client firms expect outsourcing suppliers to transform their business. Indeed, most outsourcing suppliers have delivered IT operational and business process innovation to client firms; however, achieving strategic innovation through outsourcing has been perceived to be far more challenging. Building on the growing interest in the IS outsourcing literature, this paper seeks to advance our understanding of the role that relational and contractual governance plays in achieving strategic innovation through outsourcing. We hypothesized and tested empirically the relationship between the quality of client-supplier relationships and the likelihood of achieving strategic innovation, and the interaction effect of different contract types, such as fixed-price, time and materials, partnership and their combinations. Results from a pan-European survey of 248 large firms suggest that high-quality relationships between clients and suppliers may indeed help achieve strategic innovation through outsourcing. However, within the spectrum of various outsourcing contracts, only the partnership contract, when included in the client contract portfolio alongside either fixed-price, time and materials or their combination, presents a significant positive effect on relational governance and is likely to strengthen the positive effect of the quality of client-supplier relationships on strategic innovation.",
"title": ""
},
{
"docid": "88c5bcaa173584042939f9b879aa5b3d",
"text": "We present the old-but–new problem of data quality from a statistical perspective, in part with the goal of attracting more statisticians, especially academics, to become engaged in research on a rich set of exciting challenges. The data quality landscape is described, and its research foundations in computer science, total quality management and statistics are reviewed. Two case studies based on an EDA approach to data quality are used to motivate a set of research challenges for statistics that span theory, methodology and software tools.",
"title": ""
},
{
"docid": "351bacafe348cf235dc24e2925e71992",
"text": "Dengue, chikungunya, and Zika virus epidemics transmitted by Aedes aegypti mosquitoes have recently (re)emerged and spread throughout the Americas, Southeast Asia, the Pacific Islands, and elsewhere. Understanding how environmental conditions affect epidemic dynamics is critical for predicting and responding to the geographic and seasonal spread of disease. Specifically, we lack a mechanistic understanding of how seasonal variation in temperature affects epidemic magnitude and duration. Here, we develop a dynamic disease transmission model for dengue virus and Aedes aegypti mosquitoes that integrates mechanistic, empirically parameterized, and independently validated mosquito and virus trait thermal responses under seasonally varying temperatures. We examine the influence of seasonal temperature mean, variation, and temperature at the start of the epidemic on disease dynamics. We find that at both constant and seasonally varying temperatures, warmer temperatures at the start of epidemics promote more rapid epidemics due to faster burnout of the susceptible population. By contrast, intermediate temperatures (24-25°C) at epidemic onset produced the largest epidemics in both constant and seasonally varying temperature regimes. When seasonal temperature variation was low, 25-35°C annual average temperatures produced the largest epidemics, but this range shifted to cooler temperatures as seasonal temperature variation increased (analogous to previous results for diurnal temperature variation). Tropical and sub-tropical cities such as Rio de Janeiro, Fortaleza, and Salvador, Brazil; Cali, Cartagena, and Barranquilla, Colombia; Delhi, India; Guangzhou, China; and Manila, Philippines have mean annual temperatures and seasonal temperature ranges that produced the largest epidemics. However, more temperate cities like Shanghai, China had high epidemic suitability because large seasonal variation offset moderate annual average temperatures. By accounting for seasonal variation in temperature, the model provides a baseline for mechanistically understanding environmental suitability for virus transmission by Aedes aegypti. Overlaying the impact of human activities and socioeconomic factors onto this mechanistic temperature-dependent framework is critical for understanding likelihood and magnitude of outbreaks.",
"title": ""
},
{
"docid": "77ce1cb8ed5676ad39f5de3e459c8b63",
"text": "DNNs (Deep Neural Networks) have demonstrated great success in numerous applications such as image classification, speech recognition, video analysis, etc. However, DNNs are much more computation-intensive and memory-intensive than previous shallow models. Thus, it is challenging to deploy DNNs in both large-scale data centers and real-time embedded systems. Considering performance, flexibility, and energy efficiency, FPGA-based accelerator for DNNs is a promising solution. Unfortunately, conventional accelerator design flows make it difficult for FPGA developers to keep up with the fast pace of innovations in DNNs. To overcome this problem, we propose FP-DNN (Field Programmable DNN), an end-to-end framework that takes TensorFlow-described DNNs as input, and automatically generates the hardware implementations on FPGA boards with RTL-HLS hybrid templates. FP-DNN performs model inference of DNNs with our high-performance computation engine and carefully-designed communication optimization strategies. We implement CNNs, LSTM-RNNs, and Residual Nets with FPDNN, and experimental results show the great performance and flexibility provided by our proposed FP-DNN framework.",
"title": ""
},
{
"docid": "8a91835866267ef83ba245c12ce1283d",
"text": "Due to the increasing demand in the agricultural industry, the need to effectively grow a plant and increase its yield is very important. In order to do so, it is important to monitor the plant during its growth period, as well as, at the time of harvest. In this paper image processing is used as a tool to monitor the diseases on fruits during farming, right from plantation to harvesting. For this purpose artificial neural network concept is used. Three diseases of grapes and two of apple have been selected. The system uses two image databases, one for training of already stored disease images and the other for implementation of query images. Back propagation concept is used for weight adjustment of training database. The images are classified and mapped to their respective disease categories on basis of three feature vectors, namely, color, texture and morphology. From these feature vectors morphology gives 90% correct result and it is more than other two feature vectors. This paper demonstrates effective algorithms for spread of disease and mango counting. Practical implementation of neural networks has been done using MATLAB.",
"title": ""
},
{
"docid": "b540fb20a265d315503543a5d752f486",
"text": "Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as welldefined quantifiers of a deep network’s expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep convolutional arithmetic circuit in terms of a Tensor Network is made available. This description enables us to carry a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in the graph which characterizes it. Thus, we demonstrate a direct control over the inductive bias of the designed deep convolutional network via its channel numbers, which we show to be related to this min-cut in the underlying graph. This result is relevant to any practitioner designing a convolutional network for a specific task. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on more common convolutional networks which involve ReLU activations and max pooling. Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.",
"title": ""
},
{
"docid": "1cf4e42c496c97b5b153e680606cd07a",
"text": "The remarkable success of machine learning, especially deep learning, has produced a variety of cloud-based services for mobile users. Such services require an end user to send data to the service provider, which presents a serious challenge to end-user privacy. To address this concern, prior works either add noise to the data or send features extracted from the raw data. They struggle to balance between the utility and privacy because added noise reduces utility and raw data can be reconstructed from extracted features. This work represents a methodical departure from prior works: we balance between a measure of privacy and another of utility by leveraging adversarial learning to find a sweeter tradeoff. We design an encoder that optimizes against the reconstruction error (a measure of privacy), adversarially by a Decoder, and the inference accuracy (a measure of utility) by a Classifier. The result is RAN, a novel deep model with a new training algorithm that automatically extracts features for classification that are both private and useful. It turns out that adversarially forcing the extracted features to only conveys the intended information required by classification leads to an implicit regularization leading to better classification accuracy than the original model which completely ignores privacy. Thus, we achieve better privacy with better utility, a surprising possibility in machine learning! We conducted extensive experiments on five popular datasets over four training schemes, and demonstrate the superiority of RAN compared with existing alternatives.",
"title": ""
},
{
"docid": "6f9ae554513bba3c685f86909e31645f",
"text": "Triboelectric energy harvesting has been applied to various fields, from large-scale power generation to small electronics. Triboelectric energy is generated when certain materials come into frictional contact, e.g., static electricity from rubbing a shoe on a carpet. In particular, textile-based triboelectric energy-harvesting technologies are one of the most promising approaches because they are not only flexible, light, and comfortable but also wearable. Most previous textile-based triboelectric generators (TEGs) generate energy by vertically pressing and rubbing something. However, we propose a corrugated textile-based triboelectric generator (CT-TEG) that can generate energy by stretching. Moreover, the CT-TEG is sewn into a corrugated structure that contains an effective air gap without additional spacers. The resulting CT-TEG can generate considerable energy from various deformations, not only by pressing and rubbing but also by stretching. The maximum output performances of the CT-TEG can reach up to 28.13 V and 2.71 μA with stretching and releasing motions. Additionally, we demonstrate the generation of sufficient energy from various activities of a human body to power about 54 LEDs. These results demonstrate the potential application of CT-TEGs for self-powered systems.",
"title": ""
},
{
"docid": "227ad7173deb06c2d492bb27ce70f5df",
"text": "A public service motivation (PSM) inclines employees to provide effort out of concern for the impact of that effort on a valued social service. Though deemed to be important in the literature on public administration, this motivation has not been formally considered by economists. When a PSM exists, this paper establishes conditions under which government bureaucracy can better obtain PSM motivated effort from employees than a standard profit maximizing firm. The model also provides an efficiency rationale for low-powered incentives in both bureaucracies and other organizations producing social services. 2000 Elsevier Science S.A. All rights reserved.",
"title": ""
},
{
"docid": "be722a19b56ef604d6fe24012470e61f",
"text": "In this paper, we derive optimality results for greedy Bayesian-network search algo rithms that perform single-edge modifica tions at each step and use asymptotically consistent scoring criteria. Our results ex tend those of Meek (1997) and Chickering (2002), who demonstrate that in the limit of large datasets, if the generative distribu tion is perfect with respect to a DAG defined over the observable variables, such search al gorithms will identify this optimal (i.e. gen erative) DAG model. We relax their assump tion about the generative distribution, and assume only that this distribution satisfies the composition property over the observable variables, which is a more realistic assump tion for real domains. Under this assump tion, we guarantee that the search algorithms identify an inclusion-optimal model; that is, a model that (1) contains the generative dis tribution and (2) has no sub-model that con tains this distribution. In addition, we show that the composition property is guaranteed to hold whenever the dependence relation ships in the generative distribution can be characterized by paths between singleton el ements in some generative graphical model (e.g. a DAG, a chain graph, or a Markov network) even when the generative model in cludes unobserved variables, and even when the observed data is subject to selection bias.",
"title": ""
},
{
"docid": "e322a4f6d36ccc561b6b793ef85db9c2",
"text": "Abdominal bracing is often adopted in fitness and sports conditioning programs. However, there is little information on how muscular activities during the task differ among the muscle groups located in the trunk and from those during other trunk exercises. The present study aimed to quantify muscular activity levels during abdominal bracing with respect to muscle- and exercise-related differences. Ten healthy young adult men performed five static (abdominal bracing, abdominal hollowing, prone, side, and supine plank) and five dynamic (V- sits, curl-ups, sit-ups, and back extensions on the floor and on a bench) exercises. Surface electromyogram (EMG) activities of the rectus abdominis (RA), external oblique (EO), internal oblique (IO), and erector spinae (ES) muscles were recorded in each of the exercises. The EMG data were normalized to those obtained during maximal voluntary contraction of each muscle (% EMGmax). The % EMGmax value during abdominal bracing was significantly higher in IO (60%) than in the other muscles (RA: 18%, EO: 27%, ES: 19%). The % EMGmax values for RA, EO, and ES were significantly lower in the abdominal bracing than in some of the other exercises such as V-sits and sit-ups for RA and EO and back extensions for ES muscle. However, the % EMGmax value for IO during the abdominal bracing was significantly higher than those in most of the other exercises including dynamic ones such as curl-ups and sit-ups. These results suggest that abdominal bracing is one of the most effective techniques for inducing a higher activation in deep abdominal muscles, such as IO muscle, even compared to dynamic exercises involving trunk flexion/extension movements. Key PointsTrunk muscle activities during abdominal bracing was examined with regard to muscle- and exercise-related differences.Abdominal bracing preferentially activates internal oblique muscles even compared to dynamic exercises involving trunk flexion/extension movements.Abdominal bracing should be included in exercise programs when the goal is to improve spine stability.",
"title": ""
},
{
"docid": "a0e9e04a3b04c1974951821d44499fa7",
"text": "PURPOSE\nTo examine factors related to turnover of new graduate nurses in their first job.\n\n\nDESIGN\nData were obtained from a 3-year panel survey (2006-2008) of the Graduates Occupational Mobility Survey that followed-up college graduates in South Korea. The sample consisted of 351 new graduates whose first job was as a full-time registered nurse in a hospital.\n\n\nMETHODS\nSurvival analysis was conducted to estimate survival curves and related factors, including individual and family, nursing education, hospital, and job dissatisfaction (overall and 10 specific job aspects).\n\n\nFINDINGS\nThe estimated probabilities of staying in their first job for 1, 2, and 3 years were 0.823, 0.666, and 0.537, respectively. Nurses reporting overall job dissatisfaction had significantly lower survival probabilities than those who reported themselves to be either neutral or satisfied. Nurses were more likely to leave if they were married or worked in small (vs. large), nonmetropolitan, and nonunionized hospitals. Dissatisfaction with interpersonal relationships, work content, and physical work environment was associated with a significant increase in the hazards of leaving the first job.\n\n\nCONCLUSIONS\nHospital characteristics as well as job satisfaction were significantly associated with new graduates' turnover.\n\n\nCLINICAL RELEVANCE\nThe high turnover of new graduates could be reduced by improving their job satisfaction, especially with interpersonal relationships, work content, and the physical work environment.",
"title": ""
},
{
"docid": "4bcc31d35cf2e413f984a7b9b9d5d47f",
"text": "Abstractive text summarization is a blossoming area of natural language processing research in which short textual summaries are generated from longer input documents. Existing state-of-the-art methods take long time to train, and are limited to functioning on relatively short input sequences. We evaluate neural network architectures with simplified encoder stages, which naturally support arbitrarily long input sequences in a computationally efficient manner.ive text summarization is a blossoming area of natural language processing research in which short textual summaries are generated from longer input documents. Existing state-of-the-art methods take long time to train, and are limited to functioning on relatively short input sequences. We evaluate neural network architectures with simplified encoder stages, which naturally support arbitrarily long input sequences in a computationally efficient manner.",
"title": ""
},
{
"docid": "f9effb8f9a0a2966c5f4bcf8b420177e",
"text": "This paper identifies a new opportunity for improving the efficiency of a processor core: memory access phases of programs. These are dynamic regions of programs where most of the instructions are devoted to memory access or address computation. These occur naturally in programs because of workload properties, or when employing an in-core accelerator, we get induced phases where the code execution on the core is access code. We observe such code requires an OOO core's dataflow and dynamism to run fast and does not execute well on an in-order processor. However, an OOO core consumes much power, effectively increasing energy consumption and reducing the energy efficiency of in-core accelerators.\n We develop an execution model called memory access dataflow (MAD) that encodes dataflow computation, event-condition-action rules, and explicit actions. Using it we build a specialized engine that provides an OOO core's performance but at a fraction of the power. Such an engine can serve as a general way for any accelerator to execute its respective induced phase, thus providing a common interface and implementation for current and future accelerators. We have designed and implemented MAD in RTL, and we demonstrate its generality and flexibility by integration with four diverse accelerators (SSE, DySER, NPU, and C-Cores). Our quantitative results show, relative to in-order, 2-wide OOO, and 4-wide OOO, MAD provides 2.4×, 1.4× and equivalent performance respectively. It provides 0.8×, 0.6× and 0.4× lower energy.",
"title": ""
},
{
"docid": "6d2efd95c2b3486bec5b4c2ab2db18ad",
"text": "The goal of this work is to replace objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene using the approach from Gupta et al. [13]. We use a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel normals in images containing rendered synthetic objects. When tested on real data, it outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place the model that fits the best into the scene. We observe a 48% relative improvement in performance at the task of 3D detection over the current state-of-the-art [33], while being an order of magnitude faster at the same time.",
"title": ""
},
{
"docid": "e14cd8d955d80591f905b3858c9b5d09",
"text": "With the advent of the Internet of Things (IoT), security has emerged as a major design goal for smart connected devices. This explosion in connectivity created a larger attack surface area. Software-based approaches have been applied for security purposes; however, these methods must be extended with security-oriented technologies that promote hardware as the root of trust. The ARM TrustZone can enable trusted execution environments (TEEs), but existing solutions disregard real-time needs. Here, the authors demonstrate why TrustZone is becoming a reference technology for securing IoT edge devices, and how enhanced TEEs can help meet industrial IoT applications real-time requirements.",
"title": ""
},
{
"docid": "2ea280e6e5e1118a1fb2f538e9a9e621",
"text": "As the number of agents grows in a multi-agent system, it is very impractical to have a team that is preprogrammed to share a cooperation protocol. In this situation experienced agents that have been trained to cooperate with different teams come in handy. In this paper based on some UAV teams, we investigate the behavior of an experienced agent which we call ad-hoc agent. First, we train the ad-hoc agent to cooperate with different teams and in different environmental situations. This training is based on an approximated version of MDP which is very fast. Then the ad-hoc agent joins the teams and tries to cooperate with them in a Persistent Surveillance Mission(PSM). Our experiment for the ad-hoc agent starts with joining different fixed strategy teams and ends with joining a fully cooperative team. The results of the simulation show that the performance of a team having an ad-hoc agent is even comparable to a team that has completely trained together.",
"title": ""
},
{
"docid": "564322060dee31328da7b3bc3d762f95",
"text": "The automatic detection and transcription of musical chords from audio is an established music computing task. The choice of chord profiles and higher-level time-series modelling have received a lot of attention, resulting in methods with an overall performance of more than 70% in the MIREX Chord Detection task 2009. Research on the front end of chord transcription algorithms has often concentrated on finding good chord templates to fit the chroma features. In this paper we reverse this approach and seek to find chroma features that are more suitable for usage in a musically-motivated model. We do so by performing a prior approximate transcription using an existing technique to solve non-negative least squares problems (NNLS). The resulting NNLS chroma features are tested by using them as an input to an existing state-of-the-art high-level model for chord transcription. We achieve very good results of 80% accuracy using the song collection and metric of the 2009 MIREX Chord Detection tasks. This is a significant increase over the top result (74%) in MIREX 2009. The nature of some chords makes their identification particularly susceptible to confusion between fundamental frequency and partials. We show that the recognition of these diffcult chords in particular is substantially improved by the prior approximate transcription using NNLS.",
"title": ""
},
{
"docid": "a2956ccd41684096197e426959f15300",
"text": "State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.",
"title": ""
},
{
"docid": "a0c9d3c2b14395a6d476b12c5e8b28b0",
"text": "Undergraduate research experiences enhance learning and professional development, but providing effective and scalable research training is often limited by practical implementation and orchestration challenges. We demonstrate Agile Research Studios (ARS)---a socio-technical system that expands research training opportunities by supporting research communities of practice without increasing faculty mentoring resources.",
"title": ""
}
] | scidocsrr |
7f2303e532cd188758f34799820759d4 | RUN: Residual U-Net for Computer-Aided Detection of Pulmonary Nodules without Candidate Selection | [
{
"docid": "bf85db5489a61b5fca8d121de198be97",
"text": "In this paper, we propose a novel recursive recurrent neural network (R2NN) to model the end-to-end decoding process for statistical machine translation. R2NN is a combination of recursive neural network and recurrent neural network, and in turn integrates their respective capabilities: (1) new information can be used to generate the next hidden state, like recurrent neural networks, so that language model and translation model can be integrated naturally; (2) a tree structure can be built, as recursive neural networks, so as to generate the translation candidates in a bottom up manner. A semi-supervised training approach is proposed to train the parameters, and the phrase pair embedding is explored to model translation confidence directly. Experiments on a Chinese to English translation task show that our proposed R2NN can outperform the stateof-the-art baseline by about 1.5 points in BLEU.",
"title": ""
}
] | [
{
"docid": "0ab6ee50661e92fe7935ddd2c447f793",
"text": "In this paper, a high-performance single-phase transformerless online uninterruptible power supply (UPS) is proposed. The proposed UPS is composed of a four-leg-type converter, which operates as a rectifier, a battery charger/discharger, and an inverter. The rectifier has the capability of power-factor collection and regulates a constant dc-link voltage. The battery charger/discharger eliminates the need for the transformer and the increase of the number of battery and supplies the power demanded by the load to the dc-link capacitor in the event of the input-power failure or abrupt decrease of the input voltage. The inverter provides a regulated sinusoidal output voltage to the load and limits the output current under an impulsive load. The control of the dc-link voltage enhances the transient response of the output voltage and the utilization of the input power. By utilizing the battery charger/discharger, the overall efficiency of the system is improved, and the size, weight, and cost of the system are significantly reduced. Experimental results obtained with a 3-kVA prototype show a normal efficiency of over 95.6% and an input power factor of over 99.7%.",
"title": ""
},
{
"docid": "84d2e697b2f2107d34516909f22768c6",
"text": "PURPOSE\nSchema therapy was first applied to individuals with borderline personality disorder (BPD) over 20 years ago, and more recent work has suggested efficacy across a range of disorders. The present review aimed to systematically synthesize evidence for the efficacy and effectiveness of schema therapy in reducing early maladaptive schema (EMS) and improving symptoms as applied to a range of mental health disorders in adults including BPD, other personality disorders, eating disorders, anxiety disorders, and post-traumatic stress disorder.\n\n\nMETHODS\nStudies were identified through electronic searches (EMBASE, PsycINFO, MEDLINE from 1990 to January 2016).\n\n\nRESULTS\nThe search produced 835 titles, of which 12 studies were found to meet inclusion criteria. A significant number of studies of schema therapy treatment were excluded as they failed to include a measure of schema change. The Clinical Trial Assessment Measure was used to rate the methodological quality of studies. Schema change and disorder-specific symptom change was found in 11 of the 12 studies.\n\n\nCONCLUSIONS\nSchema therapy has demonstrated initial significant results in terms of reducing EMS and improving symptoms for personality disorders, but formal mediation analytical studies are lacking and rigorous evidence for other mental health disorders is currently sparse.\n\n\nPRACTITIONER POINTS\nFirst review to investigate whether schema therapy leads to reduced maladaptive schemas and symptoms across mental health disorders. Limited evidence for schema change with schema therapy in borderline personality disorder (BPD), with only three studies conducting correlational analyses. Evidence for schema and symptom change in other mental health disorders is sparse, and so use of schema therapy for disorders other than BPD should be based on service user/patient preference and clinical expertise and/or that the theoretical underpinnings of schema therapy justify the use of it therapeutically. Further work is needed to develop the evidence base for schema therapy for other disorders.",
"title": ""
},
{
"docid": "b93919bbb2dab3a687cccb71ee515793",
"text": "The processing and analysis of colour images has become an important area of study and application. The representation of the RGB colour space in 3D-polar coordinates (hue, saturation and brightness) can sometimes simplify this task by revealing characteristics not visible in the rectangular coordinate representation. The literature describes many such spaces (HLS, HSV, etc.), but many of them, having been developed for computer graphics applications, are unsuited to image processing and analysis tasks. We describe the flaws present in these colour spaces, and present three prerequisites for 3D-polar coordinate colour spaces well-suited to image processing and analysis. We then derive 3D-polar coordinate representations which satisfy the prerequisites, namely a space based on the norm which has efficient linear transform functions to and from the RGB space; and an improved HLS (IHLS) space. The most important property of this latter space is a “well-behaved” saturation coordinate which, in contrast to commonly used ones, always has a small numerical value for near-achromatic colours, and is completely independent of the brightness function. Three applications taking advantage of the good properties of the IHLS space are described: the calculation of a saturation-weighted hue mean and of saturation-weighted hue histograms, and feature extraction using mathematical morphology. 1Updated July 16, 2003. 2Jean Serra is with the Centre de Morphologie Mathématique, Ecole des Mines de Paris, 35 rue Saint-Honoré, 77305 Fontainebleau cedex, France.",
"title": ""
},
{
"docid": "3d25100e6a9410c6c08fae14135043d0",
"text": "We propose to learn semantic spatio-temporal embeddings for videos to support high-level video analysis. The first step of the proposed embedding employs a deep architecture consisting of two channels of convolutional neural networks (capturing appearance and local motion) followed by their corresponding Gated Recurrent Unit encoders for capturing longer-term temporal structure of the CNN features. The resultant spatio-temporal representation (a vector) is used to learn a mapping via a multilayer perceptron to the word2vec semantic embedding space, leading to a semantic interpretation of the video vector that supports high-level analysis. We demonstrate the usefulness and effectiveness of this new video representation by experiments on action recognition, zero-shot video classification, and “word-to-video” retrieval, using the UCF-101 dataset.",
"title": ""
},
{
"docid": "a4d7596cfcd4a9133c5677a481c88cf0",
"text": "The understanding of where humans look in a scene is a problem of great interest in visual perception and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. In this paper we give two contribution. First, we show a model of visual attention that is simply based on deep convolutional neural networks trained for object classification tasks. A method for visualizing saliency maps is defined which is evaluated in a saliency prediction task. Second, we integrate the information of these maps with a bottom-up differential model of eye-movements to simulate visual attention scanpaths. Results on saliency prediction and scores of similarity with human scanpaths demonstrate the effectiveness of this model.",
"title": ""
},
{
"docid": "37e65ab2fc4d0a9ed5b8802f41a1a2a2",
"text": "This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its \"adolescence\" (Shortliffe EH. The adolescence of AI in medicine: will the field come of age in the '90s? Artificial Intelligence in Medicine 1993;5:93-106). In this article, the discussants reflect on medical AI research during the subsequent years and characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision-making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.",
"title": ""
},
{
"docid": "7263e768247914490f3b91c916587614",
"text": "Activity Recognition is an emerging field of research, born from the larger fields of ubiquitous computing, context-aware computing and multimedia. Recently, recognizing everyday life activities becomes one of the challenges for pervasive computing. In our work, we developed a novel wearable system easy to use and comfortable to bring. Our wearable system is based on a new set of 20 computationally efficient features and the Random Forest classifier. We obtain very encouraging results with classification accuracy of human activities recognition of up",
"title": ""
},
{
"docid": "de3789fe0dccb53fe8555e039fde1bc6",
"text": "Estimating consumer surplus is challenging because it requires identification of the entire demand curve. We rely on Uber’s “surge” pricing algorithm and the richness of its individual level data to first estimate demand elasticities at several points along the demand curve. We then use these elasticity estimates to estimate consumer surplus. Using almost 50 million individuallevel observations and a regression discontinuity design, we estimate that in 2015 the UberX service generated about $2.9 billion in consumer surplus in the four U.S. cities included in our analysis. For each dollar spent by consumers, about $1.60 of consumer surplus is generated. Back-of-the-envelope calculations suggest that the overall consumer surplus generated by the UberX service in the United States in 2015 was $6.8 billion.",
"title": ""
},
{
"docid": "47e9515f703c840c38ab0c3095f48a3a",
"text": "Hnefatafl is an ancient Norse game - an ancestor of chess. In this paper, we report on the development of computer players for this game. In the spirit of Blondie24, we evolve neural networks as board evaluation functions for different versions of the game. An unusual aspect of this game is that there is no general agreement on the rules: it is no longer much played, and game historians attempt to infer the rules from scraps of historical texts, with ambiguities often resolved on gut feeling as to what the rules must have been in order to achieve a balanced game. We offer the evolutionary method as a means by which to judge the merits of alternative rule sets",
"title": ""
},
{
"docid": "da63c4d9cc2f3278126490de54c34ce5",
"text": "The growth of Web-based social networking and the properties of those networks have created great potential for producing intelligent software that integrates a user's social network and preferences. Our research looks particularly at assigning trust in Web-based social networks and investigates how trust information can be mined and integrated into applications. This article introduces a definition of trust suitable for use in Web-based social networks with a discussion of the properties that will influence its use in computation. We then present two algorithms for inferring trust relationships between individuals that are not directly connected in the network. Both algorithms are shown theoretically and through simulation to produce calculated trust values that are highly accurate.. We then present TrustMail, a prototype email client that uses variations on these algorithms to score email messages in the user's inbox based on the user's participation and ratings in a trust network.",
"title": ""
},
{
"docid": "ac56eb533e3ae40b8300d4269fd2c08f",
"text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.",
"title": ""
},
{
"docid": "7f47434e413230faf04849cf43a845fa",
"text": "Although surgical resection remains the gold standard for treatment of liver cancer, there is a growing need for alternative therapies. Microwave ablation (MWA) is an experimental procedure that has shown great promise for the treatment of unresectable tumors and exhibits many advantages over other alternatives to resection, such as radiofrequency ablation and cryoablation. However, the antennas used to deliver microwave power largely govern the effectiveness of MWA. Research has focused on coaxial-based interstitial antennas that can be classified as one of three types (dipole, slot, or monopole). Choked versions of these antennas have also been developed, which can produce localized power deposition in tissue and are ideal for the treatment of deepseated hepatic tumors.",
"title": ""
},
{
"docid": "9e439c83f4c29b870b1716ceae5aa1f3",
"text": "Suspension system plays an imperative role in retaining the continuous road wheel contact for better road holding. In this paper, fuzzy self-tuning of PID controller is designed to control of active suspension system for quarter car model. A fuzzy self-tuning is used to develop the optimal control gain for PID controller (proportional, integral, and derivative gains) to minimize suspension working space of the sprung mass and its change rate to achieve the best comfort of the driver. The results of active suspension system with fuzzy self-tuning PID controller are presented graphically and comparisons with the PID and passive system. It is found that, the effectiveness of using fuzzy self-tuning appears in the ability to tune the gain parameters of PID controller",
"title": ""
},
{
"docid": "e25b5b0f51f9c00515a849f5fd05d39b",
"text": "These are exciting times for research into the psychological processes underlying second language acquisition (SLA). In the 1970s, SLA emerged as a field of inquiry in its own right (Brown 1980), and in the 1980s, a number of different approaches to central questions in the field began to develop in parallel and in relative isolation (McLaughlin and Harrington 1990). In the 1990s, however, these different approaches began to confront one another directly. Now we are entering a period reminiscent, in many ways, of the intellectually turbulent times following the Chomskyan revolution (Chomsky 1957; 1965). Now, as then, researchers are debating basic premises of a science of mind, language, and learning. Some might complain, not entirely without reason, that we are still debating the same issues after 30-40 years. However, there are now new conceptual and research tools available to test hypotheses in ways previously thought impossible. Because of this, many psychologists believe there will soon be significant advancement on some SLA issues that have resisted closure for decades. We outline some of these developments and explore where the field may be heading. More than ever, it appears possible that psychological theory and SLA theory are converging on solutions to common issues.",
"title": ""
},
{
"docid": "f57bcea5431a11cc431f76727ba81a26",
"text": "We develop a Bayesian procedure for estimation and inference for spatial models of roll call voting. This approach is extremely flexible, applicable to any legislative setting, irrespective of size, the extremism of the legislators’ voting histories, or the number of roll calls available for analysis. The model is easily extended to let other sources of information inform the analysis of roll call data, such as the number and nature of the underlying dimensions, the presence of party whipping, the determinants of legislator preferences, and the evolution of the legislative agenda; this is especially helpful since generally it is inappropriate to use estimates of extant methods (usually generated under assumptions of sincere voting) to test models embodying alternate assumptions (e.g., log-rolling, party discipline). A Bayesian approach also provides a coherent framework for estimation and inference with roll call data that eludes extant methods; moreover, via Bayesian simulation methods, it is straightforward to generate uncertainty assessments or hypothesis tests concerning any auxiliary quantity of interest or to formally compare models. In a series of examples we show how our method is easily extended to accommodate theoretically interesting models of legislative behavior. Our goal is to provide a statistical framework for combining the measurement of legislative preferences with tests of models of legislative behavior.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "f2346fffa0297554440145a3165e921e",
"text": "The proliferation of knowledge-sharing communities like Wikipedia and the advances in automated information extraction from Web pages enable the construction of large knowledge bases with facts about entities and their relationships. The facts can be represented in the RDF data model, as so-called subject-property-object triples, and can thus be queried by structured query languages like SPARQL. In principle, this allows precise querying in the database spirit. However, RDF data may be highly diverse and queries may return way too many results, so that ranking by informativeness measures is crucial to avoid overwhelming users. Moreover, as facts are extracted from textual contexts or have community-provided annotations, it can be beneficial to consider also keywords for formulating search requests. This paper gives an overview of recent and ongoing work on ranked retrieval of RDF data with keyword-augmented structured queries. The ranking method is based on statistical language models, the state-of-the-art paradigm in information retrieval. The paper develops a novel form of language models for the structured, but schema-less setting of RDF triples and extended SPARQL queries. 1 Motivation and Background Entity-Relationship graphs are receiving great attention for information management outside of mainstream database engines. In particular, the Semantic-Web data model RDF (Resource Description Format) is gaining popularity for applications on scientific data such as biological networks [14], social Web2.0 applications [4], large-scale knowledge bases such as DBpedia [2] or YAGO [13], and more generally, as a light-weight representation for the “Web of data” [5]. An RDF data collection consists of a set of subject-property-object triples, SPO triples for short. In ER terminology, an SPO triple corresponds to a pair of entities connected by a named relationship or to an entity connected to the value of a named attribute. As the object of a triple can in turn be the subject of other triples, we can also view the RDF data as a graph of typed nodes and typed edges where nodes correspond to entities and edges to relationships (viewing attributes as relations as well). Some of the existing RDF collections contain more than a billion triples. As a simple example that we will use throughout the paper, consider a Web portal on movies. Table 1 shows a few sample triples. The example illustrates a number of specific requirements that RDF data poses for querying: Copyright 0000 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering",
"title": ""
},
{
"docid": "e9c4877bca5f1bfe51f97818cc4714fa",
"text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. 
Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments, these are: o Freedom to Fail o Rapid Feedback o Progression o Storytelling Freedom to Fail Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using",
"title": ""
},
{
"docid": "019375c14bc0377acbf259ef423fa46f",
"text": "Original approval signatures are on file with the University of Oregon Graduate School.",
"title": ""
},
{
"docid": "78ced4f3e99c5abc1a3f5e81fbc63106",
"text": "This paper presents a high performance vision-based system with a single static camera for traffic surveillance, for moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. As occlusion is present, an algorithm was implemented to reduce it. The tracking is performed with adaptive Kalman filter. Finally, the selected geometric features: estimated area, height, and width are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results in eight real traffic videos with more than 4000 ground truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate or recall, precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles.",
"title": ""
}
] | scidocsrr |
f28662555a0c4bea946168cb47ac0b27 | High-Performance Neural Networks for Visual Object Classification | [
{
"docid": "27ad413fa5833094fb2e557308fa761d",
"text": "A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.",
"title": ""
},
{
"docid": "0a3f5ff37c49840ec8e59cbc56d31be2",
"text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.",
"title": ""
}
] | [
{
"docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b",
"text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.",
"title": ""
},
{
"docid": "da1d1e9ddb5215041b9565044b9feecb",
"text": "As multiprocessors with large numbers of processors become more prevalent, we face the task of developing scheduling algorithms for the multiprogrammed use of such machines. The scheduling decisions must take into account the number of processors available, the overall system load, and the ability of each application awaiting activation to make use of a given number of processors.\nThe parallelism within an application can be characterized at a number of different levels of detail. At the highest level, it might be characterized by a single parameter (such as the proportion of the application that is sequential, or the average number of processors the application would use if an unlimited number of processors were available). At the lowest level, representing all the parallelism in the application requires the full data dependency graph (which is more information than is practically manageable).\nIn this paper, we examine the quality of processor allocation decisions under multiprogramming that can be made with several different high-level characterizations of application parallelism. We demonstrate that decisions based on parallelism characterizations with two to four parameters are superior to those based on single-parameter characterizations (such as fraction sequential or average parallelism). The results are based predominantly on simulation, with some guidance from a simple analytic model.",
"title": ""
},
{
"docid": "460238e247fc60b0ca300ba9caafdc97",
"text": "Time-resolved optical spectroscopy is widely used to study vibrational and electronic dynamics by monitoring transient changes in excited state populations on a femtosecond timescale. Yet the fundamental cause of electronic and vibrational dynamics—the coupling between the different energy levels involved—is usually inferred only indirectly. Two-dimensional femtosecond infrared spectroscopy based on the heterodyne detection of three-pulse photon echoes has recently allowed the direct mapping of vibrational couplings, yielding transient structural information. Here we extend the approach to the visible range and directly measure electronic couplings in a molecular complex, the Fenna–Matthews–Olson photosynthetic light-harvesting protein. As in all photosynthetic systems, the conversion of light into chemical energy is driven by electronic couplings that ensure the efficient transport of energy from light-capturing antenna pigments to the reaction centre. We monitor this process as a function of time and frequency and show that excitation energy does not simply cascade stepwise down the energy ladder. We find instead distinct energy transport pathways that depend sensitively on the detailed spatial properties of the delocalized excited-state wavefunctions of the whole pigment–protein complex.",
"title": ""
},
{
"docid": "486dae23f5a7b19cf8c20fab60de6b0f",
"text": "Histopathological alterations induced by paraquat in the digestive gland of the freshwater snail Lymnaea luteola were investigated. Samples were collected from the Kondakarla lake (Visakhapatnam, Andhra Pradesh, India), where agricultural activities are widespread. Acute toxicity of series of concentration of paraquat to Lymnaea luteola was determined by recording snail mortality of 24, 48, 72 and 96 hrs exposures. The Lc50 value based on probit analysis was found to be 0.073 ml/L for 96 hrs of exposure to the herbicide. Results obtained shown that there were no mortality of snail either in control and those exposed to 0.0196 ml/L paraquat throughout the 96 hrs 100% mortality was recorded with 48hrs on exposed to 0.790 ppm concentration of stock solution of paraquat. At various concentrations paraquat causes significant dose dependent histopathological changes in the digestive gland of L.luteola. The histopathological examinations revealed the following changes: amebocytes infiltrations, the lumen of digestive gland tubule was shrunken; degeneration of cells, secretory cells became irregular, necrosis of cells and atrophy in the connective tissue of digestive gland.",
"title": ""
},
{
"docid": "ab7663ef08505e37be080eab491d2607",
"text": "This paper has studied the fatigue and friction of big end bearing on an engine connecting rod by combining the multi-body dynamics and hydrodynamic lubrication model. First, the basic equations and the application on AVL-Excite software platform of multi-body dynamics have been described in detail. Then, introduce the hydrodynamic lubrication model, which is the extended Reynolds equation derived from the Navier-Stokes equation and the equation of continuity. After that, carry out the static calculation of connecting rod assembly. At the same time, multi-body dynamics analysis has been performed and stress history can be obtained by finite element data recovery. Next, execute the fatigue analysis combining the Static stress and dynamic stress, safety factor distribution of connecting rod will be obtained as result. At last, detailed friction analysis of the big-end bearing has been performed. And got a good agreement when contrast the simulation results to the Bearing wear in the experiment.",
"title": ""
},
{
"docid": "d390b0e5b1892297af37659fb92c03b5",
"text": "Encouraged by recent waves of successful applications of deep learning, some researchers have demonstrated the effectiveness of applying convolutional neural networks (CNN) to time series classification problems. However, CNN and other traditional methods require the input data to be of the same dimension which prevents its direct application on data of various lengths and multi-channel time series with different sampling rates across channels. Long short-term memory (LSTM), another tool in the deep learning arsenal and with its design nature, is more appropriate for problems involving time series such as speech recognition and language translation. In this paper, we propose a novel model incorporating a sequence-to-sequence model that consists two LSTMs, one encoder and one decoder. The encoder LSTM accepts input time series of arbitrary lengths, extracts information from the raw data and based on which the decoder LSTM constructs fixed length sequences that can be regarded as discriminatory features. For better utilization of the raw data, we also introduce the attention mechanism into our model so that the feature generation process can peek at the raw data and focus its attention on the part of the raw data that is most relevant to the feature under construction. We call our model S2SwA, as the short for Sequence-to-Sequence with Attention. We test S2SwA on both uni-channel and multi-channel time series datasets and show that our model is competitive with the state-of-the-art in real world tasks such as human activity recognition.",
"title": ""
},
{
"docid": "7374e16190e680669f76fc7972dc3975",
"text": "Open-plan office layout is commonly assumed to facilitate communication and interaction between co-workers, promoting workplace satisfaction and team-work effectiveness. On the other hand, open-plan layouts are widely acknowledged to be more disruptive due to uncontrollable noise and loss of privacy. Based on the occupant survey database from Center for the Built Environment (CBE), empirical analyses indicated that occupants assessed Indoor Environmental Quality (IEQ) issues in different ways depending on the spatial configuration (classified by the degree of enclosure) of their workspace. Enclosed private offices clearly outperformed open-plan layouts in most aspects of IEQ, particularly in acoustics, privacy and the proxemics issues. Benefits of enhanced ‘ease of interaction’ were smaller than the penalties of increased noise level and decreased privacy resulting from open-plan office configuration.",
"title": ""
},
{
"docid": "61309b5f8943f3728f714cd40f260731",
"text": "Article history: Received 4 January 2011 Received in revised form 1 August 2011 Accepted 13 August 2011 Available online 15 September 2011 Advertising media are a means of communication that creates different marketing and communication results among consumers. Over the years, newspaper, magazine, TV, and radio have provided a one-way media where information is broadcast and communicated. Due to the widespread application of the Internet, advertising has entered into an interactive communications mode. In the advent of 3G broadband mobile communication systems and smartphone devices, consumers' preferences can be pre-identified and advertising messages can therefore be delivered to consumers in a multimedia format at the right time and at the right place with the right message. In light of this new advertisement possibility, designing personalized mobile advertising to meet consumers' needs becomes an important issue. This research uses the fuzzy Delphi method to identify the key personalized attributes in a personalized mobile advertising message for different products. Results of the study identify six important design attributes for personalized advertisements: price, preference, promotion, interest, brand, and type of mobile device. As personalized mobile advertising becomes more integrated in people's daily activities, its pros and cons and social impact are also discussed. The research result can serve as a guideline for the key parties in mobile marketing industry to facilitate the development of the industry and ensure that advertising resources are properly used. © 2011 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "e4a1200b7f8143b1322c8a66d625d842",
"text": "This paper examines the spatial patterns of unemployment in Chicago between 1980 and 1990. We study unemployment clustering with respect to different social and economic distance metrics that reßect the structure of agents social networks. SpeciÞcally, we use physical distance, travel time, and differences in ethnic and occupational distribution between locations. Our goal is to determine whether our estimates of spatial dependence are consistent with models in which agents employment status is affected by information exchanged locally within their social networks. We present non-parametric estimates of correlation across Census tracts as a function of each distance metric as well as pairs of metrics, both for unemployment rate itself and after conditioning on a set of tract characteristics. Our results indicate that there is a strong positive and statistically signiÞcant degree of spatial dependence in the distribution of raw unemployment rates, for all our metrics. However, once we condition on a set of covariates, most of the spatial autocorrelation is eliminated, with the exception of physical and occupational distance. Racial and ethnic composition variables are the single most important factor in explaining the observed correlation patterns.",
"title": ""
},
{
"docid": "78e561cfb2578cc9d5634f008a4e6c7e",
"text": "The TCP transport layer protocol is designed for connections that traverse a single path between the sender and receiver. However, there are several environments in which multiple paths can be used by a connection simultaneously. In this paper we consider the problem of supporting striped connections that operate over multiple paths. We propose an end-to-end transport layer protocol called pTCP that allows connections to enjoy the aggregate bandwidths offered by the multiple paths, irrespective of the individual characteristics of the paths. We show that pTCP can have a varied range of applications through instantiations in three different environments: (a) bandwidth aggregation on multihomed mobile hosts, (b) service differentiation using purely end-to-end mechanisms, and (c) end-systems based network striping. In each of the applications we demonstrate the applicability of pTCP and how its efficacy compares with existing approaches through simulation results.",
"title": ""
},
{
"docid": "42d3adba03f835f120404cfe7571a532",
"text": "This study investigated the psychometric properties of the Arabic version of the SMAS. SMAS is a variant of IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world.",
"title": ""
},
{
"docid": "16cac565c6163db83496c41ea98f61f9",
"text": "The rapid increase in multimedia data transmission over the Internet necessitates the multi-modal summarization (MMS) from collections of text, image, audio and video. In this work, we propose an extractive multi-modal summarization method that can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal content. For audio information, we design an approach to selectively use its transcription. For visual information, we learn the joint representations of text and images using a neural network. Finally, all of the multimodal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese, which is released to the public1. The experimental results obtained on this dataset demonstrate that our method outperforms other competitive baseline methods.",
"title": ""
},
{
"docid": "c01fbc8bd278b06e0476c6fbffca0ad1",
"text": "Memristors can be optimally used to implement logic circuits. In this paper, a logic circuit based on Memristor Ratioed Logic (MRL) is proposed. Specifically, a hybrid CMOS-memristive logic family by a suitable combination of 4 memristor and a complementary inverter CMOS structure is presented. The proposed structure by having outputs of AND, OR and XOR gates of inputs at the same time, reducing the area and connections and fewer power consumption can be appropriate for implementation of more complex circuits. Circuit design of a single-bit Full Adder is considered as a case study. The Full Adder proposed is implemented using 10 memristors and 4 transistors comparing to 18 memristors and 8 transistors in the other related work.",
"title": ""
},
{
"docid": "b44d6d71650fc31c643ac00bd45772cd",
"text": "We give in this paper a complete description of the Knuth-Bendix completion algorithm. We prove its correctness in full, isolating carefully the essential abstract notions, so that the proof may be extended to other versions and extensions of the basic algorithm. We show that it defines a semidecision algorithm for the validity problem in the equational theories for which it applies, yielding a decision procedure whenever the algorithm terminates.",
"title": ""
},
{
"docid": "45faf47f5520a4f21719f5169334aabb",
"text": "Many dynamic-content online services are comprised of multiple interacting components and data partitions distributed across server clusters. Understanding the performance of these services is crucial for efficient system management. This paper presents a profile-driven performance model for cluster-based multi-component online services. Our offline constructed application profiles characterize component resource needs and inter-component communications. With a given component placement strategy, the application profile can be used to predict system throughput and average response time for the online service. Our model differentiates remote invocations from fast-path calls between co-located components and we measure the network delay caused by blocking inter-component communications. Validation with two J2EE-based online applications show that our model can predict application performance with small errors (less than 13% for throughput and less than 14% for the average response time). We also explore how this performance model can be used to assist system management functions for multi-component online services, with case examinations on optimized component placement, capacity planning, and cost-effectiveness analysis.",
"title": ""
},
{
"docid": "ea7acc555f2cb2de898a3706c31006db",
"text": "Securing the supply chain of integrated circuits is of utmost importance to computer security. In addition to counterfeit microelectronics, the theft or malicious modification of designs in the foundry can result in catastrophic damage to critical systems and large projects. In this letter, we describe a 3-D architecture that splits a design into two separate tiers: one tier that contains critical security functions is manufactured in a trusted foundry; another tier is manufactured in an unsecured foundry. We argue that a split manufacturing approach to hardware trust based on 3-D integration is viable and provides several advantages over other approaches.",
"title": ""
},
{
"docid": "103f4ff03cc1aef7c173b36ccc33e680",
"text": "Wireless environments are typically characterized by unpredictable and unreliable channel conditions. In such environments, fragmentation of network-bound data is a commonly adapted technique to improve the probability of successful data transmissions and reduce the energy overheads incurred due to re-transmissions. The overall latencies involved with fragmentation and consequent re-assembly of fragments are often neglected which bear significant effects on the real-time guarantees of the participating applications. This work studies the latencies introduced as a result of the fragmentation performed at the link layer (MAC layer in IEEE 802.11) of the source device and their effects on end-to-end delay constraints of mobile applications (e.g., media streaming). Based on the observed effects, this work proposes a feedback-based adaptive approach that chooses an optimal fragment size to (a) satisfy end-to-end delay requirements of the distributed application and (b) minimize the energy consumption of the source device by increasing the probability of successful transmissions, thereby reducing re-transmissions and their associated costs.",
"title": ""
},
{
"docid": "1cb5a2d9abde060ba4f004fac84ca9ca",
"text": "To reach a real-time stereo vision in embedded systems, we propose in this paper, the adaptation and optimization of the well-known Disparity Space Image (DSI) on a single FPGA(Field programmable gate Arrays) that is designed for high efficiency when realized in hardware. An initial disparity map was calculated using the DSI structure and then a median filter was applied to smooth the disparity map. Many methods reported in the literature are mainly restricted to implement the SAD algorithm (Sum of Absolute Differences) on an FPGA. An evaluation of our method is done by comparing the obtained results of our method with a very fast and well-known sum of absolute differences algorithm using hardware-based implementations.",
"title": ""
},
{
"docid": "5948f08c1ca41b7024a4f7c0b2a99e5b",
"text": "Nowadays, neural networks play an important role in the task of relation classification. By designing different neural architectures, researchers have improved the performance to a large extent, compared with traditional methods. However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolution neural networks or recurrent networks). They may fail to explore the potential representation space in different abstraction levels. In this paper, we propose deep recurrent neural networks (DRNNs) to tackle this challenge. Further, we propose a data augmentation method by leveraging the directionality of relations. We evaluate our DRNNs on the SemEval-2010 Task 8, and achieve an F1score of 85.81%, outperforming state-of-theart recorded results.",
"title": ""
},
{
"docid": "698dfa061afb89ac4dc768ec7a68ff1a",
"text": "a r t i c l e i n f o Social network sites such as Facebook give off the impression that others are doing better than we are. As a result, the use of these sites may lead to negative social comparison (i.e., feeling like others are doing better than oneself). According to social comparison theory, such negative social comparisons are detrimental to perceptions about the self. The current study therefore investigated the indirect relationship between Facebook use and self-perceptions through negative social comparison. Because happier people process social information differently than unhappier people, we also investigated whether the relationship between Facebook use and social comparison and, as a result, self-perception, differs depending on the degree of happiness of the emerging adult. A survey among 231 emerging adults (age 18–25) showed that Facebook use was related to a greater degree of negative social comparison, which was in turn related negatively to self-perceived social competence and physical attractiveness. The indirect relationship between Facebook use and self-perception through negative social comparison was attenuated among happier individuals, as the relationship between Facebook use and negative social comparison was weaker among happier individuals. SNS use was thus negatively related to self-perception through negative social comparison, especially among unhappy individuals. Social network sites (SNSs), such as Facebook, are notorious for giving off the impression that other people are living better lives than we are (Chou & Edge, 2012). People generally present themselves and their lives positively on SNSs (Dorethy, Fiebert, & Warren, 2014) for example by posting pictures in which they look their best (Manago, Graham, Greenfield, & Salimkhan, 2008) and are having a good time with their friends (Zhao, Grasmuck, & Martin, 2008). The vast majority of time spent on SNSs consists of viewing these idealized SNS profiles, pictures, and status updates of others (Pempek, Yermolayeva, & Calvert, 2009). Such information about how others are doing may impact how people see themselves, that is, their self-perceptions because people base their self-perceptions at least partly on how they are doing in comparison to others (Festinger, 1954). These potential effects of SNS use on self-perceptions through social comparison are the focus of the current study. Previous research on the effects of SNSs on self-perceptions has focused predominantly on the implications of social interactions on these websites (e.g., feedback from others) (Valkenburg, Peter, & Schouten, 2006) or due to editing and viewing content about the self …",
"title": ""
}
] | scidocsrr |
d16838b5f73debd581035e2fda1acd49 | Multimedia Data Management for Disaster Situation Awareness | [
{
"docid": "ca19a74fde1b9e3a0ab76995de8b0f36",
"text": "Sensors on (or attached to) mobile phones can enable attractive sensing applications in different domains, such as environmental monitoring, social networking, healthcare, transportation, etc. We introduce a new concept, sensing as a service (S2aaS), i.e., providing sensing services using mobile phones via a cloud computing system. An S2aaS cloud needs to meet the following requirements: 1) it must be able to support various mobile phone sensing applications on different smartphone platforms; 2) it must be energy-efficient; and 3) it must have effective incentive mechanisms that can be used to attract mobile users to participate in sensing activities. In this vision paper, we identify unique challenges of designing and implementing an S2aaS cloud, review existing systems and methods, present viable solutions, and point out future research directions.",
"title": ""
},
{
"docid": "40ebf37907d738dd64b5a87b93b4a432",
"text": "Deep learning has led to many breakthroughs in machine perception and data mining. Although there are many substantial advances of deep learning in the applications of image recognition and natural language processing, very few work has been done in video analysis and semantic event detection. Very deep inception and residual networks have yielded promising results in the 2014 and 2015 ILSVRC challenges, respectively. Now the question is whether these architectures are applicable to and computationally reasonable in a variety of multimedia datasets. To answer this question, an efficient and lightweight deep convolutional network is proposed in this paper. This network is carefully designed to decrease the depth and width of the state-of-the-art networks while maintaining the high-performance. The proposed deep network includes the traditional convolutional architecture in conjunction with residual connections and very light inception modules. Experimental results demonstrate that the proposed network not only accelerates the training procedure, but also improves the performance in different multimedia classification tasks.",
"title": ""
},
{
"docid": "f8c1654abd0ffced4b5dbf3ef0724d36",
"text": "The proposed social media crisis mapping platform for natural disasters uses locations from gazetteer, street map, and volunteered geographic information (VGI) sources for areas at risk of disaster and matches them to geoparsed real-time tweet data streams. The authors use statistical analysis to generate real-time crisis maps. Geoparsing results are benchmarked against existing published work and evaluated across multilingual datasets. Two case studies compare five-day tweet crisis maps to official post-event impact assessment from the US National Geospatial Agency (NGA), compiled from verified satellite and aerial imagery sources.",
"title": ""
}
] | [
{
"docid": "72a44b022df79077d6c5f4dd472b9fe9",
"text": "The minimal state of consciousness is sentience. This includes any phenomenal sensory experience - exteroceptive, such as vision and olfaction; interoceptive, such as pain and hunger; or proprioceptive, such as the sense of bodily position and movement. We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience), its phylogenetically earliest sustainable manifestation and the driver of its evolution. We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience). We end with a discussion of the implications of our proposal for the distribution of consciousness in the animal kingdom, suggesting testable predictions, and revisiting the ongoing debate about the function of minimal consciousness in light of our approach.",
"title": ""
},
{
"docid": "a8435865938b123b2406ced69e50aad5",
"text": "In this paper, we describe the textual linguistic resources in nearly 3 dozen languages being produced by Linguistic Data Consortium for DARPA’s LORELEI (Low Resource Languages for Emergent Incidents) Program. The goal of LORELEI is to improve the performance of human language technologies for low-resource languages and enable rapid re-training of such technologies for new languages, with a focus on the use case of deployment of resources in sudden emergencies such as natural disasters. Representative languages have been selected to provide broad typological coverage for training, and surprise incident languages for testing will be selected over the course of the program. Our approach treats the full set of language packs as a coherent whole, maintaining LORELEI-wide specifications, tag sets and guidelines, while allowing for adaptation to the specific needs created by each language. Each representative language corpus, therefore, both stands on its own as a resource for the specific language and forms part of a large multilingual resource for broader cross-language technology development.",
"title": ""
},
{
"docid": "211e7ba5b2a7a690d52efa8c8d3b9453",
"text": "A new leaky-wave antenna based on substrate integrated waveguide (SIW) with H-shaped slots is proposed and investigated. The SIW with H-shaped slots is analyzed as a rectangular waveguide with H-shaped slots. Using an aperture magnetic field integral equation (MFIE), it is found that the SIW with H-shaped slots supports a leaky waveguide mode, a surface-wave mode and a proper waveguide mode depending on frequency. The radiation property is also evaluated using the MFIE. The leaky-wave antenna based on the SIW with H-shaped slots has a narrow beam that scans from broadside to endfire with frequency. The antenna produces an elliptical polarization, including circular polarization. Measured results are consistent with the simulation from HFSS and the theoretical analysis from the MFIE.",
"title": ""
},
{
"docid": "47084e8587696dc9d392d895a99ddb83",
"text": "We present an online approach to efficiently and simultaneously detect and track the 2D pose of multiple people in a video sequence. We build upon Part Affinity Field (PAF) representation designed for static images, and propose an architecture that can encode and predict Spatio-Temporal Affinity Fields (STAF) across a video sequence. In particular, we propose a novel temporal topology cross-linked across limbs which can consistently handle body motions of a wide range of magnitudes. Additionally, we make the overall approach recurrent in nature, where the network ingests STAF heatmaps from previous frames and estimates those for the current frame. Our approach uses only online inference and tracking, and is currently the fastest and the most accurate bottom-up approach that is runtime invariant to the number of people in the scene and accuracy invariant to input frame rate of camera. Running at ∼30 fps on a single GPU at single scale, it achieves highly competitive results on the PoseTrack benchmarks. 1",
"title": ""
},
{
"docid": "e9b574499c8a5395b2b7e5615eb60394",
"text": "The lateral geniculate nucleus is the best understood thalamic relay and serves as a model for all thalamic relays. Only 5-10% of the input to geniculate relay cells derives from the retina, which is the driving input. The rest is modulatory and derives from local inhibitory inputs, descending inputs from layer 6 of the visual cortex, and ascending inputs from the brainstem. These modulatory inputs control many features of retinogeniculate transmission. One such feature is the response mode, burst or tonic, of relay cells, which relates to the attentional demands at the moment. This response mode depends on membrane potential, which is controlled effectively by the modulator inputs. The lateral geniculate nucleus is a first-order relay, because it relays subcortical (i.e. retinal) information to the cortex for the first time. By contrast, the other main thalamic relay of visual information, the pulvinar region, is largely a higher-order relay, since much of it relays information from layer 5 of one cortical area to another. All thalamic relays receive a layer-6 modulatory input from cortex, but higher-order relays in addition receive a layer-5 driver input. Corticocortical processing may involve these corticothalamocortical 're-entry' routes to a far greater extent than previously appreciated. If so, the thalamus sits at an indispensable position for the modulation of messages involved in corticocortical processing.",
"title": ""
},
{
"docid": "35fbdf776186afa7d8991fa4ff22503d",
"text": "Lang Linguist Compass 2016; 10: 701–719 wileyo Abstract Research and industry are becoming more and more interested in finding automatically the polarised opinion of the general public regarding a specific subject. The advent of social networks has opened the possibility of having access to massive blogs, recommendations, and reviews. The challenge is to extract the polarity from these data, which is a task of opinion mining or sentiment analysis. The specific difficulties inherent in this task include issues related to subjective interpretation and linguistic phenomena that affect the polarity of words. Recently, deep learning has become a popular method of addressing this task. However, different approaches have been proposed in the literature. This article provides an overview of deep learning for sentiment analysis in order to place these approaches in context.",
"title": ""
},
{
"docid": "3e36f9b6ad8ff66c070dd65306a82333",
"text": "The topic of representation, recovery and manipulation of three-dimensional (3D) scenes from two-dimensional (2D) images thereof, provides a fertile ground for both intellectual theoretically inclined questions related to the algebra and geometry of the problem and to practical applications such as Visual Recognition, Animation and View Synthesis, recovery of scene structure and camera ego-motion, object detection and tracking, multi-sensor alignment, etc. The basic materials have been known since the turn of the century, but the full scope of the problem has been under intensive study since 1992, rst on the algebra of two views and then on the algebra of multiple views leading to a relatively mature understanding of what is known as \\multilinear matching constraints\", and the \\trilinear tensor\" of three or more views. The purpose of this paper is, rst and foremost, to provide a coherent framework for expressing the ideas behind the analysis of multiple views. Secondly, to integrate the various incremental results that have appeared on the subject into one coherent manuscript.",
"title": ""
},
{
"docid": "ef74392a9681d16b14970740cbf85191",
"text": "We propose an efficient physics-based method for dexterous ‘real hand’ - ‘virtual object’ interaction in Virtual Reality environments. Our method is based on the Coulomb friction model, and we show how to efficiently implement it in a commodity VR engine for realtime performance. This model enables very convincing simulations of many types of actions such as pushing, pulling, grasping, or even dexterous manipulations such as spinning objects between fingers without restrictions on the objects' shapes or hand poses. Because it is an analytic model, we do not require any prerecorded data, in contrast to previous methods. For the evaluation of our method, we conduction a pilot study that shows that our method is perceived more realistic and natural, and allows for more diverse interactions. Further, we evaluate the computational complexity of our method to show real-time performance in VR environments.",
"title": ""
},
{
"docid": "b3fd58901706f7cb3ed653572e634c78",
"text": "This paper presents visual analysis of eye state and head pose (HP) for continuous monitoring of alertness of a vehicle driver. Most existing approaches to visual detection of nonalert driving patterns rely either on eye closure or head nodding angles to determine the driver drowsiness or distraction level. The proposed scheme uses visual features such as eye index (EI), pupil activity (PA), and HP to extract critical information on nonalertness of a vehicle driver. EI determines if the eye is open, half closed, or closed from the ratio of pupil height and eye height. PA measures the rate of deviation of the pupil center from the eye center over a time period. HP finds the amount of the driver's head movements by counting the number of video segments that involve a large deviation of three Euler angles of HP, i.e., nodding, shaking, and tilting, from its normal driving position. HP provides useful information on the lack of attention, particularly when the driver's eyes are not visible due to occlusion caused by large head movements. A support vector machine (SVM) classifies a sequence of video segments into alert or nonalert driving events. Experimental results show that the proposed scheme offers high classification accuracy with acceptably low errors and false alarms for people of various ethnicity and gender in real road driving conditions.",
"title": ""
},
{
"docid": "1269bdb48c686c9643f007d4aee4afea",
"text": "Hundreds of public SPARQL endpoints have been deployed on the Web, forming a novel decentralised infrastructure for querying billions of structured facts from a variety of sources on a plethora of topics. But is this infrastructure mature enough to support applications? For 427 public SPARQL endpoints registered on the DataHub, we conduct various experiments to test their maturity. Regarding discoverability, we find that only one-third of endpoints make descriptive meta-data available, making it difficult to locate or learn about their content and capabilities. Regarding interoperability, we find patchy support for established SPARQL features like ORDER BY as well as (understandably) for new SPARQL 1.1 features. Regarding efficiency, we show that the performance of endpoints for generic queries can vary by up to 3–4 orders of magnitude. Regarding availability, based on a 27-month long monitoring experiment, we show that only 32.2% of public endpoints can be expected to have (monthly) “two-nines” uptimes of 99–100%.",
"title": ""
},
{
"docid": "06848cf456dbbcd5891cd33522ab7b75",
"text": "Credit scoring models play a fundamental role in the risk management practice at most banks. They are used to quantify credit risk at counterparty or transaction level in the different phases of the credit cycle (e.g. application, behavioural, collection models). The credit score empowers users to make quick decisions or even to automate decisions and this is extremely desirable when banks are dealing with large volumes of clients and relatively small margin of profits at individual transaction level (i.e. consumer lending, but increasingly also small business lending). In this article, we analyze the history and new developments related to credit scoring models. We find that with the new Basel Capital Accord, credit scoring models have been remotivated and given unprecedented significance. Banks, in particular, and most financial institutions worldwide, have either recently developed or modified existing internal credit risk models to conform with the new rules and best practices recently updated in the market. Moreover, we analyze the key steps of the credit scoring model’s lifecycle (i.e. assessment, implementation, validation) highlighting the main requirement imposed by Basel II. We conclude that banks that are going to implement the most advanced approach to calculate their capital requirements under Basel II will need to increase their attention and consideration of credit scoring models in the next future. JEL classification: G17; G21",
"title": ""
},
{
"docid": "72778e59443066c01142cd0d48400490",
"text": "Optimal load shedding (LS) design as an emergency plan is one of the main control challenges posed by emerging new uncertainties and numerous distributed generators including renewable energy sources in a modern power system. This paper presents an overview of the key issues and new challenges on optimal LS synthesis concerning the integration of wind turbine units into the power systems. Following a brief survey on the existing LS methods, the impact of power fluctuation produced by wind powers on system frequency and voltage performance is presented. The most LS schemas proposed so far used voltage or frequency parameter via under-frequency or under-voltage LS schemes. Here, the necessity of considering both voltage and frequency indices to achieve a more effective and comprehensive LS strategy is emphasized. Then it is clarified that this problem will be more dominated in the presence of wind turbines. Keywords— Load shedding, emergency control, voltage, frequency, wind turbine.",
"title": ""
},
{
"docid": "022f0b83e93b82dfbdf7ae5f5ebe6f8f",
"text": "Most pregnant women at risk of for infection with Plasmodium vivax live in the Asia-Pacific region. However, malaria in pregnancy is not recognised as a priority by many governments, policy makers, and donors in this region. Robust data for the true burden of malaria throughout pregnancy are scarce. Nevertheless, when women have little immunity, each infection is potentially fatal to the mother, fetus, or both. WHO recommendations for the control of malaria in pregnancy are largely based on the situation in Africa, but strategies in the Asia-Pacific region are complicated by heterogeneous transmission settings, coexistence of multidrug-resistant Plasmodium falciparum and Plasmodium vivax parasites, and different vectors. Most knowledge of the epidemiology, effect, treatment, and prevention of malaria in pregnancy in the Asia-Pacific region comes from India, Papua New Guinea, and Thailand. Improved estimates of the morbidity and mortality of malaria in pregnancy are urgently needed. When malaria in pregnancy cannot be prevented, accurate diagnosis and prompt treatment are needed to avert dangerous symptomatic disease and to reduce effects on fetuses.",
"title": ""
},
{
"docid": "195f162d6525b7cb2891ee57afb88c49",
"text": "Over the last several years, the field of natural language processing has been propelled forward by an explosion in the use of deep learning models. This survey provides a brief introduction to the field and a quick overview of deep learning architectures and methods. It then sifts through the plethora of recent studies and summarizes a large assortment of relevant contributions. Analyzed research areas include several core linguistic processing issues in addition to a number of applications of computational linguistics. A discussion of the current state of the art is then provided along with recommendations for future research in the field.",
"title": ""
},
{
"docid": "ab97caed9c596430c3d76ebda55d5e6e",
"text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.",
"title": ""
},
{
"docid": "2fd2553400cacc4dcb489460a7493dcb",
"text": "Trustworthy generation of public random numbers is necessary for the security of a number of cryptographic applications. It was suggested to use the inherent unpredictability of blockchains as a source of public randomness. Entropy from the Bitcoin blockchain in particular has been used in lotteries and has been suggested for a number of other applications ranging from smart contracts to election auditing. In this Arcticle, we analyse this idea and show how an adversary could manipulate these random numbers, even with limited computational power and financial budget.",
"title": ""
},
{
"docid": "bdda2d3eef1a5040d626419c10f18d36",
"text": "This paper presents a novel hybrid permanent magnet and wound field synchronous machine geometry with a displaced reluctance axis. This concept is known for improving motor operation performance and efficiency at the cost of an inferior generator operation. To overcome this disadvantage, the proposed machine geometry is capable of inverting the magnetic asymmetry dynamically. Thereby, the positive effects of the magnetic asymmetry can be used in any operation point. This paper examines the theoretical background and shows the benefits of this geometry by means of simulation and measurement. The prototype achieves an increase in torque of 4 % and an increase in efficiency of 2 percentage points over a conventional electrically excited synchronous machine.",
"title": ""
},
{
"docid": "a7c07c3ab577bc8c5cd2930a2c58c5e0",
"text": "Convolutional neural networks have been widely applied in many low level vision tasks. In this paper, we propose a video super-resolution (SR) method named enhanced video SR network with residual blocks (EVSR). The proposed EVSR fully exploits spatio-temporal information and can implicitly capture motion relations between consecutive frames. Therefore, unlike conventional methods to video SR, EVSR does not require an explicit motion compensation process. In addition, residual learning framework exhibits excellence in convergence rate and performance improvement. Based on this, residual blocks and long skip-connection with dimension adjustment layer are proposed to predict high-frequency details. Extensive experiments validate the superiority of our approach over state-of-the-art algorithms.",
"title": ""
},
{
"docid": "a49e6ada59d6495e1378d868779d2e32",
"text": "Configurable software verification is a recent concept for expressing different program analysis and model checking approaches in one single formalism. This paper presents CPAchecker, a tool and framework that aims at easy integration of new verification components. Every abstract domain, together with the corresponding operations, is required to implement the interface of configurable program analysis (CPA). The main algorithm is configurable to perform a reachability analysis on arbitrary combinations of existing CPAs. The major design goal during the development was to provide a framework for developers that is flexible and easy to extend. We hope that researchers find it convenient and productive to implement new verification ideas and algorithms using this platform and that it advances the field by making it easier to perform practical experiments. The tool is implemented in Java and runs as command-line tool or as Eclipse plug-in. We evaluate the efficiency of our tool on benchmarks from the software model checker Blast. The first released version of CPAchecker implements CPAs for predicate abstraction, octagon, and explicit-value domains. Binaries and the source code of CPAchecker are publicly available as free software.",
"title": ""
},
{
"docid": "53142f7afb27dd14ed28228014661658",
"text": "BACKGROUND\nNodular hidradenoma is an uncommon, benign, adnexal neoplasm of apocrine origin which is a clinical simulator of other tumours.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the morphological findings of a large series of nodular hidradenomas under dermoscopic observation.\n\n\nMETHODS\nDermoscopic examination of 28 cases of nodular hidradenomas was performed to evaluate specific dermoscopic criteria and patterns.\n\n\nRESULTS\nThe most frequently occurring dermoscopic features were: (1) in 96.4% of cases, a homogeneous area that covered the lesion partially or totally, the colour of which was pinkish in 46.4% of cases, bluish in 28.6%, red-blue in 14.3%, and brownish in 10.7%; (2) white structures were found in 89.3% of cases; (3) in 82.1% of cases, vascular structures were also observed, especially arborising telangiectasias (39.3%) and polymorphous atypical vessels (28.6%).\n\n\nCONCLUSION\nNodular hidradenomas represent a dermoscopic pitfall, being difficult to differentiate clinically and dermoscopically from basal cell carcinomas and melanomas.",
"title": ""
}
] | scidocsrr |
1396859e54315d0c571f77cfd2ebec62 | HealthyTogether: exploring social incentives for mobile fitness applications | [
{
"docid": "42992bd3e26ab8b74dceb7707495d7af",
"text": "Though a variety of persuasive health applications have been designed with a preventive standpoint toward diseases in mind, many have been designed largely for a general audience. Designers of these technologies may achieve more success if applications consider an individual’s personality type. Our goal for this research was to explore the relationship between personality and persuasive technologies in the context of health-promoting mobile applications. We conducted an online survey with 240 participants using storyboards depicting eight different persuasive strategies, the Big Five Inventory for personality domains, and questions on perceptions of the persuasive technologies. Our results and analysis revealed a number of significant relationships between personality and the persuasive technologies we evaluated. The findings from this study can guide the development of persuasive technologies that can cater to individual personalities to improve the likelihood of their success.",
"title": ""
},
{
"docid": "16d949f6915cbb958cb68a26c6093b6b",
"text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.",
"title": ""
},
{
"docid": "5e7a06213a32e0265dcb8bc11a5bb3f1",
"text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.",
"title": ""
}
] | [
{
"docid": "13c7278393988ec2cfa9a396255e6ff3",
"text": "Finding good transfer functions for rendering medical volumes is difficult, non-intuitive, and time-consuming. We introduce a clustering-based framework for the automatic generation of transfer functions for volumetric data. The system first applies mean shift clustering to oversegment the volume boundaries according to their low-high (LH) values and their spatial coordinates, and then uses hierarchical clustering to group similar voxels. A transfer function is then automatically generated for each cluster such that the number of occlusions is reduced. The framework also allows for semi-automatic operation, where the user can vary the hierarchical clustering results or the transfer functions generated. The system improves the efficiency and effectiveness of visualizing medical images and is suitable for medical imaging applications.",
"title": ""
},
{
"docid": "fe5b87cacf87c6eab9c252cef41c24d8",
"text": "The Filter Bank Common Spatial Pattern (FBCSP) algorithm employs multiple spatial filters to automatically select key temporal-spatial discriminative EEG characteristics and the Naïve Bayesian Parzen Window (NBPW) classifier using offline learning in EEG-based Brain-Computer Interfaces (BCI). However, it has yet to address the non-stationarity inherent in the EEG between the initial calibration session and subsequent online sessions. This paper presents the FBCSP that employs the NBPW classifier using online adaptive learning that augments the training data with available labeled data during online sessions. However, employing semi-supervised learning that simply augments the training data with available data using predicted labels can be detrimental to the classification accuracy. Hence, this paper presents the FBCSP using online semi-supervised learning that augments the training data with available data that matches the probabilistic model captured by the NBPW classifier using predicted labels. The performances of FBCSP using online adaptive and semi-supervised learning are evaluated on the BCI Competition IV datasets IIa and IIb and compared to the FBCSP using offline learning. The results showed that the FBCSP using online semi-supervised learning yielded relatively better session-to-session classification results compared against the FBCSP using offline learning. The FBCSP using online adaptive learning on true labels yielded the best results in both datasets, but the FBCSP using online semi-supervised learning on predicted labels is more practical in BCI applications where the true labels are not available.",
"title": ""
},
{
"docid": "046f2b6ec65903d092f8576cd210d7ee",
"text": "Aim\nThe principal study objective was to investigate the pharmacokinetic characteristics and determine the absolute bioavailability and tolerability of a new sublingual (SL) buprenorphine wafer.\n\n\nMethods\nThe study was of open label, two-way randomized crossover design in 14 fasted healthy male and female volunteers. Each participant, under naltrexone block, received either a single intravenous dose of 300 mcg of buprenorphine as a constant infusion over five minutes or a sublingual dose of 800 mcg of buprenorphine in two treatment periods separated by a seven-day washout period. Blood sampling for plasma drug assay was taken on 16 occasions throughout a 48-hour period (predose and at 10, 20, 30, and 45 minutes, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 24 and 48 hours postdose). The pharmacokinetic parameters were determined by noncompartmental analyses of the buprenorphine plasma concentration-time profiles. Local tolerability was assessed using modified Likert scales.\n\n\nResults\nThe absolute bioavailability of SL buprenorphine was 45.4% (95% confidence interval = 37.8-54.3%). The median times to peak plasma concentration were 10 minutes and 60 minutes after IV and SL administration, respectively. The peak plasma concentration was 2.65 ng/mL and 0.74 ng/mL after IV and SL administration, respectively. The half-lives were 9.1 hours and 11.2 hours after IV and SL administration, respectively. The wafer had very good local tolerability.\n\n\nConclusions\nThis novel sublingual buprenorphine wafer has high bioavailability and reduced Tmax compared with other SL tablet formulations of buprenorphine. The wafer displayed very good local tolerability. The results suggest that this novel buprenorphine wafer may provide enhanced clinical utility in the management of both acute and chronic pain.\n\n\nBackground\nBuprenorphine is approved for use in pain management and opioid addiction. Sublingual administration of buprenorphine is a simple and noninvasive route of administration and has been available for many years. Improved sublingual formulations may lead to increased utilization of this useful drug for acute and chronic pain management.",
"title": ""
},
{
"docid": "1b0046cbee1afd3e7471f92f115f3d74",
"text": "We present an approach to improve statistical machine translation of image descriptions by multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images that are captioned in the target language, and use the captions of the most similar images for crosslingual reranking of translation outputs. Our approach does not depend on the availability of large amounts of in-domain parallel data, but only relies on available large datasets of monolingually captioned images, and on state-ofthe-art convolutional neural networks to compute image similarities. Our experimental evaluation shows improvements of 1 BLEU point over strong baselines.",
"title": ""
},
{
"docid": "20e504a115a1448ea366eae408b6391f",
"text": "Clustering algorithms have emerged as an alternative powerful meta-learning tool to accurately analyze the massive volume of data generated by modern applications. In particular, their main goal is to categorize data into clusters such that objects are grouped in the same cluster when they are similar according to specific metrics. There is a vast body of knowledge in the area of clustering and there has been attempts to analyze and categorize them for a larger number of applications. However, one of the major issues in using clustering algorithms for big data that causes confusion amongst practitioners is the lack of consensus in the definition of their properties as well as a lack of formal categorization. With the intention of alleviating these problems, this paper introduces concepts and algorithms related to clustering, a concise survey of existing (clustering) algorithms as well as providing a comparison, both from a theoretical and an empirical perspective. From a theoretical perspective, we developed a categorizing framework based on the main properties pointed out in previous studies. Empirically, we conducted extensive experiments where we compared the most representative algorithm from each of the categories using a large number of real (big) data sets. The effectiveness of the candidate clustering algorithms is measured through a number of internal and external validity metrics, stability, runtime, and scalability tests. In addition, we highlighted the set of clustering algorithms that are the best performing for big data.",
"title": ""
},
{
"docid": "49168ffff3d4212bc010e8085a3c2e8f",
"text": "Recent advances in the solution of nonconvex optimization problems use simulated annealing techniques that are considerably faster than exhaustive global search techniques. This letter presents a simulated annealing technique, which is t/log (t) times faster than conventional simulated annealing, and applies it to a multisensor location and tracking problem.",
"title": ""
},
{
"docid": "88e1d4f4245a4162ddd27503302ce6b4",
"text": "Using ethnographic research methods, the authors studied the structure of the needs and priorities of people working in a vineyard to gain a better understanding of the potential for sensor networks in agriculture. We discuss an extended study of vineyard workers and their work practices to assess the potential for sensor network systems to aid work in this environment. The major purpose is to find new directions and new topics that pervasive computing and sensor networks might address in designing technologies to support a broader range of users and activities.",
"title": ""
},
{
"docid": "61c68d03ed5769bf4c061ba78624cc7f",
"text": "Extant xenarthrans (armadillos, anteaters and sloths) are among the most derived placental mammals ever evolved. South America was the cradle of their evolutionary history. During the Tertiary, xenarthrans experienced an extraordinary radiation, whereas South America remained isolated from other continents. The 13 living genera are relics of this earlier diversification and represent one of the four major clades of placental mammals. Sequences of the three independent protein-coding nuclear markers alpha2B adrenergic receptor (ADRA2B), breast cancer susceptibility (BRCA1), and von Willebrand Factor (VWF) were determined for 12 of the 13 living xenarthran genera. Comparative evolutionary dynamics of these nuclear exons using a likelihood framework revealed contrasting patterns of molecular evolution. All codon positions of BRCA1 were shown to evolve in a strikingly similar manner, and third codon positions appeared less saturated within placentals than those of ADRA2B and VWF. Maximum likelihood and Bayesian phylogenetic analyses of a 47 placental taxa data set rooted by three marsupial outgroups resolved the phylogeny of Xenarthra with some evidence for two radiation events in armadillos and provided a strongly supported picture of placental interordinal relationships. This topology was fully compatible with recent studies, dividing placentals into the Southern Hemisphere clades Afrotheria and Xenarthra and a monophyletic Northern Hemisphere clade (Boreoeutheria) composed of Laurasiatheria and Euarchontoglires. Partitioned likelihood statistical tests of the position of the root, under different character partition schemes, identified three almost equally likely hypotheses for early placental divergences: a basal Afrotheria, an Afrotheria + Xenarthra clade, or a basal Xenarthra (Epitheria hypothesis). We took advantage of the extensive sampling realized within Xenarthra to assess its impact on the location of the root on the placental tree. By resampling taxa within Xenarthra, the conservative Shimodaira-Hasegawa likelihood-based test of alternative topologies was shown to be sensitive to both character and taxon sampling.",
"title": ""
},
{
"docid": "fa7682dc85d868e57527fdb3124b309c",
"text": "The seminal 2003 paper by Cosley, Lab, Albert, Konstan, and Reidl, demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users which has been shown to bias ratings. This effect is called Social Inuence Bias (SIB); the tendency to conform to the perceived \\norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/",
"title": ""
},
{
"docid": "b46b8dd33cf82d82d41f501ea87ebfc1",
"text": "Repetition is a core principle in music. This is especially true for popular songs, generally marked by a noticeable repeating musical structure, over which the singer performs varying lyrics. On this basis, we propose a simple method for separating music and voice, by extraction of the repeating musical structure. First, the period of the repeating structure is found. Then, the spectrogram is segmented at period boundaries and the segments are averaged to create a repeating segment model. Finally, each time-frequency bin in a segment is compared to the model, and the mixture is partitioned using binary time-frequency masking by labeling bins similar to the model as the repeating background. Evaluation on a dataset of 1,000 song clips showed that this method can improve on the performance of an existing music/voice separation method without requiring particular features or complex frameworks.",
"title": ""
},
{
"docid": "36248b57ff386a6e316b7c8273e351d0",
"text": "Mental stress has become a social issue and could become a cause of functional disability during routine work. In addition, chronic stress could implicate several psychophysiological disorders. For example, stress increases the likelihood of depression, stroke, heart attack, and cardiac arrest. The latest neuroscience reveals that the human brain is the primary target of mental stress, because the perception of the human brain determines a situation that is threatening and stressful. In this context, an objective measure for identifying the levels of stress while considering the human brain could considerably improve the associated harmful effects. Therefore, in this paper, a machine learning (ML) framework involving electroencephalogram (EEG) signal analysis of stressed participants is proposed. In the experimental setting, stress was induced by adopting a well-known experimental paradigm based on the montreal imaging stress task. The induction of stress was validated by the task performance and subjective feedback. The proposed ML framework involved EEG feature extraction, feature selection (receiver operating characteristic curve, t-test and the Bhattacharya distance), classification (logistic regression, support vector machine and naïve Bayes classifiers) and tenfold cross validation. The results showed that the proposed framework produced 94.6% accuracy for two-level identification of stress and 83.4% accuracy for multiple level identification. In conclusion, the proposed EEG-based ML framework has the potential to quantify stress objectively into multiple levels. The proposed method could help in developing a computer-aided diagnostic tool for stress detection.",
"title": ""
},
{
"docid": "0286fb17d9ddb18fb25152c7e5b943c4",
"text": "Treemaps are a well known method for the visualization of attributed hierarchical data. Previously proposed treemap layout algorithms are limited to rectangular shapes, which cause problems with the aspect ratio of the rectangles as well as with identifying the visualized hierarchical structure. The approach of Voronoi treemaps presented in this paper eliminates these problems through enabling subdivisions of and in polygons. Additionally, this allows for creating treemap visualizations within areas of arbitrary shape, such as triangles and circles, thereby enabling a more flexible adaptation of treemaps for a wider range of applications.",
"title": ""
},
{
"docid": "29479201c12e99eb9802dd05cff60c36",
"text": "Exposures to air pollution in the form of particulate matter (PM) can result in excess production of reactive oxygen species (ROS) in the respiratory system, potentially causing both localized cellular injury and triggering a systemic inflammatory response. PM-induced inflammation in the lung is modulated in large part by alveolar macrophages and their biochemical signaling, including production of inflammatory cytokines, the primary mechanism via which inflammation is initiated and sustained. We developed a robust, relevant, and flexible method employing a rat alveolar macrophage cell line (NR8383) which can be applied to routine samples of PM from air quality monitoring sites to gain insight into the drivers of PM toxicity that lead to oxidative stress and inflammation. Method performance was characterized using extracts of ambient and vehicular engine exhaust PM samples. Our results indicate that the reproducibility and the sensitivity of the method are satisfactory and comparisons between PM samples can be made with good precision. The average relative percent difference for all genes detected during 10 different exposures was 17.1%. Our analysis demonstrated that 71% of genes had an average signal to noise ratio (SNR) ≥ 3. Our time course study suggests that 4 h may be an optimal in vitro exposure time for observing short-term effects of PM and capturing the initial steps of inflammatory signaling. The 4 h exposure resulted in the detection of 57 genes (out of 84 total), of which 86% had altered expression. Similarities and conserved gene signaling regulation among the PM samples were demonstrated through hierarchical clustering and other analyses. Overlying the core congruent patterns were differentially regulated genes that resulted in distinct sample-specific gene expression \"fingerprints.\" Consistent upregulation of Il1f5 and downregulation of Ccr7 was observed across all samples, while TNFα was upregulated in half of the samples and downregulated in the other half. Overall, this PM-induced cytokine expression assay could be effectively integrated into health studies and air quality monitoring programs to better understand relationships between specific PM components, oxidative stress activity and inflammatory signaling potential.",
"title": ""
},
{
"docid": "b9717a3ce0ed7245621314ba3e1ce251",
"text": "Analog beamforming with phased arrays is a promising technique for 5G wireless communication at millimeter wave frequencies. Using a discrete codebook consisting of multiple analog beams, each beam focuses on a certain range of angles of arrival or departure and corresponds to a set of fixed phase shifts across frequency due to practical hardware considerations. However, for sufficiently large bandwidth, the gain provided by the phased array is actually frequency dependent, which is an effect called beam squint, and this effect occurs even if the radiation pattern of the antenna elements is frequency independent. This paper examines the nature of beam squint for a uniform linear array (ULA) and analyzes its impact on codebook design as a function of the number of antennas and system bandwidth normalized by the carrier frequency. The criterion for codebook design is to guarantee that each beam's minimum gain for a range of angles and for all frequencies in the wideband system exceeds a target threshold, for example 3 dB below the array's maximum gain. Analysis and numerical examples suggest that a denser codebook is required to compensate for beam squint. For example, 54% more beams are needed compared to a codebook design that ignores beam squint for a ULA with 32 antennas operating at a carrier frequency of 73 GHz and bandwidth of 2.5 GHz. Furthermore, beam squint with this design criterion limits the bandwidth or the number of antennas of the array if the other one is fixed.",
"title": ""
},
{
"docid": "81487ff9bc7cc46035d88848f9d41419",
"text": "This paper proposes a method for classifying the type of lexical-semantic relation between a given pair of words. Given an inventory of target relationships, this task can be seen as a multi-class classification problem. We train a supervised classifier by assuming that a specific type of lexical-semantic relation between a pair of words would be signaled by a carefully designed set of relation-specific similarities between the words. These similarities are computed by exploiting “sense representations” (sense/concept embeddings). The experimental results show that the proposed method clearly outperforms an existing state-of-the-art method that does not utilize sense/concept embeddings, thereby demonstrating the effectiveness of the sense representations.",
"title": ""
},
{
"docid": "2b123076b5d3e848916cd33a9c6321d0",
"text": "This paper proposes a new isolated bridgeless AC-DC power factor correction (PFC) converter. The proposed circuit consists of dual flyback converters which provide high power factor (PF). By eliminating input bridge diodes, the number of conducting components is reduced. Therefore, conduction losses are decreased and efficiency can be further improved. Critical conduction mode (CRM) operation decreases the switching losses of switch components. Thus, operational modes of CRM are analyzed and sensing configurations are also presented to address some of the challenge points such as zero crossing detection (ZCD) circuit and sensing circuits of the bridgeless converter. Using a transformer allows for more flexible voltage gain design and, thus, a single-stage isolated PFC. The proposed circuit is verified with a 75W (12V/6.4A) experimental prototype in discontinuous conduction mode (DCM) and CRM.",
"title": ""
},
{
"docid": "d1ff3f763fac877350d402402b29323c",
"text": "The study of microstrip patch antennas has made great progress in recent years. Compared with conventional antennas, microstrip patch antennas have more advantages and better prospects. They are lighter in weight, low volume, low cost, low profile, smaller in dimension and ease of fabrication and conformity. Moreover, the microstrip patch antennas can provide dual and circular polarizations, dual-frequency operation, frequency agility, broad band-width, feedline flexibility, beam scanning omnidirectional patterning. In this paper we discuss the microstrip antenna, types of microstrip antenna, feeding techniques and application of microstrip patch antenna with their advantage and disadvantages over conventional microwave antennas.",
"title": ""
},
{
"docid": "08a75a1b6643d0aedcd3419b7ac143b2",
"text": "Traditional image coding standards (such as JPEG and JPEG2000) make the decoded image suffer from many blocking artifacts or noises since the use of big quantization steps. To overcome this problem, we proposed an end-to-end compression framework based on two CNNs, as shown in Figure 1, which produce a compact representation for encoding using a third party coding standard and reconstruct the decoded image, respectively. To make two CNNs effectively collaborate, we develop a unified end-to-end learning framework to simultaneously learn CrCNN and ReCNN such that the compact representation obtained by CrCNN preserves the structural information of the image, which facilitates to accurately reconstruct the decoded image using ReCNN and also makes the proposed compression framework compatible with existing image coding standards.",
"title": ""
},
{
"docid": "2fe11bee56ecafabeb24c69aae63f8cb",
"text": "Enabled by virtualization technologies, various multi-tier applications (such as web applications) are hosted by virtual machines (VMs) in cloud data centers. Live migration of multi-tier applications across geographically distributed data centers is important for load management, power saving, routine server maintenance and quality-of-service. Different from a single-VM migration, VMs in a multi-tier application are closely correlated, which results in a correlated VM migrations problem. Current live migration algorithms for single-VM cause significant application performance degradation because intermediate data exchange between different VMs suffers relatively low bandwidth and high latency across distributed data centers. In this paper, we design and implement a coordination system called VMbuddies for correlated VM migrations in the cloud. Particularly, we propose an adaptive network bandwidth allocation algorithm to minimize the migration cost in terms of migration completion time, network traffic and migration downtime. Experiments using a public benchmark show that VMbuddies significantly reduces the performance degradation and migration cost of multi-tier applications.",
"title": ""
}
] | scidocsrr |
5f5e5ed15f27f9b81efeadaaa9302ad9 | Design of Resistive Loading Vivaldi Antenna | [
{
"docid": "c8f2aaa7c7aa874e4578005ad8b219c4",
"text": "The geometrical and electrical features of the Vivaldi antenna are studied in the light of the frequency-independent antenna theory. A scaling principle is derived for the exponential tapering of the antenna, and a closed-form model for the current distribution is provided. Such theoretical results are in good agreement with several numerical simulations performed by using the NEC2 code. Furthermore, a practical feeding system, based on a double-Y balun, is developed and tested to obtain a more systematic approach to the design of the aforesaid antennas",
"title": ""
}
] | [
{
"docid": "17bb5a93e1c5e95b2ac043c25e1542d2",
"text": "In this paper, Design and development of control circuit algorithm of 3-Φ Voltage Source Inverter (VSI) using Unipolar Sinusoidal Pulse Width Modulation (SPWM) techniques is proposed using STM32F407VGTx discovery board interfaced with MATLAB Simulink Environment. For implementing 3-Φ VSI, this method is most useful as it will eliminate complexity of control circuit algorithm occurred due to analog circuit, reduces cost, size of filter required, and improved overall inverter efficiency. Pulse width modulated voltage source inverter is preferred for high performance AC drives. The proposed control scheme and the result obtained by embedded simulations interfaced with STM32F4 microcontroller are presented and discussed. A simple hardware implementation is proposed and experimental results of SPWM based VSI are also shown. The Simulation results of the proposed scheme have excellent performance and compared with hardware results.",
"title": ""
},
{
"docid": "b8124460ac2eeab0a5afa88ba6f92804",
"text": "Evidence from diverse literatures supports the viewpoint that two modes of self-regulation exist, a lower-order system that responds quickly to associative cues of the moment and a higher-order system that responds more reflectively and planfully; that low serotonergic function is linked to relative dominance of the lower-order system; that how dominance of the lower-order system is manifested depends on additional variables; and that low serotonergic function therefore can promote behavioral patterns as divergent as impulsive aggression and lethargic depression. Literatures reviewed include work on two-mode models; studies of brain function supporting the biological plausibility of the two-mode view and the involvement of serotonergic pathways in functions pertaining to it; and studies relating low serotonergic function to impulsiveness, aggression (including extreme violence), aspects of personality, and depression vulnerability. Substantial differences between depression and other phenomena reviewed are interpreted by proposing that depression reflects both low serotonergic function and low reward sensitivity. The article closes with brief consideration of the idea that low serotonergic function relates to even more diverse phenomena, whose natures depend in part on sensitivities of other systems.",
"title": ""
},
{
"docid": "98fec87d72f6247e1a8baa1a07a41c70",
"text": "As multicast applications are deployed for mainstream use, the need to secure multicast communications will become critical. Multicast, however, does not fit the point-to-point model of most network security protocols which were designed with unicast communications in mind. As we will show, securing multicast (or group) communications is fundamentally different from securing unicast (or paired) communications. In turn, these differences can result in scalability problems for many typical applications.In this paper, we examine and model the differences between unicast and multicast security and then propose Iolus: a novel framework for scalable secure multicasting. Protocols based on Iolus can be used to achieve a variety of security objectives and may be used either to directly secure multicast communications or to provide a separate group key management service to other \"security-aware\" applications. We describe the architecture and operation of Iolus in detail and also describe our experience with a protocol based on the Iolus framework.",
"title": ""
},
{
"docid": "1f7fa34fd7e0f4fd7ff9e8bba2a78e3c",
"text": "Today many multi-national companies or organizations are adopting the use of automation. Automation means replacing the human by intelligent robots or machines which are capable to work as human (may be better than human). Artificial intelligence is a way of making machines, robots or software to think like human. As the concept of artificial intelligence is use in robotics, it is necessary to understand the basic functions which are required for robots to think and work like human. These functions are planning, acting, monitoring, perceiving and goal reasoning. These functions help robots to develop its skills and implement it. Since robotics is a rapidly growing field from last decade, it is important to learn and improve the basic functionality of robots and make it more useful and user-friendly.",
"title": ""
},
{
"docid": "49fbf351432fe37c92f59462fd03aaad",
"text": "As compared to simple actions, activities are much more complex, but semantically consistent with a human’s real life. Techniques for action recognition from sensor generated data are mature. However, there has been relatively little work on bridging the gap between actions and activities. To this end, this paper presents a novel approach for complex activity recognition comprising of two components. The first component is temporal pattern mining, which provides a midlevel feature representation for activities, encodes temporal relatedness among actions, and captures the intrinsic properties of activities. The second component is adaptive Multi-Task Learning, which captures relatedness among activities and selects discriminant features. Extensive experiments on a real-world dataset demonstrate the effectiveness of our work.",
"title": ""
},
{
"docid": "506a6a98e87fb5a6dc7e5cbe9cf27262",
"text": "Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex innerand cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises of a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large innerand cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process. Source (GTA5) Target (BDD) Figure 1: Exemplar guided image translation examples of GTA5→ BDD. Best viewed in color.",
"title": ""
},
{
"docid": "d62bded822aff38333a212ed1853b53c",
"text": "The design of an activity recognition and monitoring system based on the eWatch, multi-sensor platform worn on different body positions, is presented in this paper. The system identifies the user's activity in realtime using multiple sensors and records the classification results during a day. We compare multiple time domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. The classification accuracy on different body positions used for wearing electronic devices was evaluated",
"title": ""
},
{
"docid": "4ba81ce5756f2311dde3fa438f81e527",
"text": "To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions – especially those adding a token – to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience.",
"title": ""
},
{
"docid": "17d7936917bdee4cfdc9e92db262baa7",
"text": "RDF models are widely used in the web of data due to their flexibility and similarity to graph patterns. Because of the growing use of RDFs, their volumes and contents are increasing. Therefore, processing of such massive amount of data on a single machine is not efficient enough, because of the response time and limited hardware resources. A common approach to overcome this limitation is cluster processing and huge datasets could benefit distributed cluster processing on Apache Hadoop. Because of using too much of hard disks, the processing time is usually inadequate. In this paper, we propose a partitiong approach based on Apache Spark for rapid processing of RDF data models. A key feature of Apache Spark is using main memory instead of hard disk, so the speed of data processing in our method is improved. We have evaluated the proposed method by runing SQL queris on RDF data which partitioned on the cluster and demonstrates improved performance.",
"title": ""
},
{
"docid": "070ba5ca0e3ee7993e43af1df8b27f49",
"text": "OBJECTIVE\nThis study aimed to evaluate the reproducibility of a new grading system for lumbar foraminal stenosis.\n\n\nMATERIALS AND METHODS\nFour grades were developed for lumbar foraminal stenosis on the basis of sagittal MRI. Grade 0 refers to the absence of foraminal stenosis; grade 1 refers to mild foraminal stenosis showing perineural fat obliteration in the two opposing directions, vertical or transverse; grade 2 refers to moderate foraminal stenosis showing perineural fat obliteration in the four directions without morphologic change, both vertical and transverse directions; and grade 3 refers to severe foraminal stenosis showing nerve root collapse or morphologic change. A total of 576 foramina in 96 patients were analyzed (from L3-L4 to L5-S1). Two experienced radiologists independently assessed the sagittal MR images. Interobserver agreement between the two radiologists and intraobserver agreement by one reader were analyzed using kappa statistics.\n\n\nRESULTS\nAccording to reader 1, grade 1 foraminal stenosis was found in 33 foramina, grade 2 in six, and grade 3 in seven. According to reader 2, grade 1 foraminal stenosis was found in 32 foramina, grade 2 in six, and grade 3 in eight. Interobserver agreement in the grading of foraminal stenosis between the two readers was found to be nearly perfect (kappa value: right L3-L4, 1.0; left L3-L4, 0.905; right L4-L5, 0.929; left L4-L5, 0.942; right L5-S1, 0.919; and left L5-S1, 0.909). In intraobserver agreement by reader 1, grade 1 foraminal stenosis was found in 34 foramina, grade 2 in eight, and grade 3 in seven. Intraobserver agreement in the grading of foraminal stenosis was also found to be nearly perfect (kappa value: right L3-L4, 0.883; left L3-L4, 1.00; right L4-L5, 0.957; left L4-L5, 0.885; right L5-S1, 0.800; and left L5-S1, 0.905).\n\n\nCONCLUSION\nThe new grading system for foraminal stenosis of the lumbar spine showed nearly perfect interobserver and intraobserver agreement and would be helpful for clinical study and routine practice.",
"title": ""
},
{
"docid": "8405b35a36235ba26444655a3619812d",
"text": "Studying the reason why single-layer molybdenum disulfide (MoS2) appears to fall short of its promising potential in flexible nanoelectronics, we find that the nature of contacts plays a more important role than the semiconductor itself. In order to understand the nature of MoS2/metal contacts, we perform ab initio density functional theory calculations for the geometry, bonding, and electronic structure of the contact region. We find that the most common contact metal (Au) is rather inefficient for electron injection into single-layer MoS2 and propose Ti as a representative example of suitable alternative electrode materials.",
"title": ""
},
{
"docid": "ce7f4ecc426c83445741f679b64af330",
"text": "UC Berkeley, BAIR Lab Visiting Researcher with Sergey Levine June 2018 – Nov 2018 NEC Labs America Research Assistant with P. Vernaza, M. Chandraker May 2017 – Sept 2017 Uber Advanced Technologies Group Research Engineer with V. Ramakrishna, D. Bagnell June 2016 – Sept 2016 University of Tokyo I.I.S. Visiting Researcher with Ryo Yonetani, Yoichi Sato June 2015 – July 2015 National Robotics Engineering Center Research Programmer with Drew Bagnell June 2012 – Jan 2013 Carnegie Mellon University R.I. Research Intern with D. Dey, Drew Bagnell June 2011 – Aug 2011",
"title": ""
},
{
"docid": "0107d7777a01050a75fbe06bde3a397b",
"text": "To review our current knowledge of the pathologic bone metabolism in otosclerosis and to discuss the possibilities of non-surgical, pharmacological intervention. Otosclerosis has been suspected to be associated with defective measles virus infection, local inflammation and consecutive bone deterioration in the human otic capsule. In the early stages of otosclerosis, different pharmacological agents may delay the progression or prevent further deterioration of the disease and consecutive hearing loss. Although effective anti-osteoporotic drugs have become available, the use of sodium fluoride and bisphosphonates in otosclerosis has not yet been successful. Bioflavonoids may relieve tinnitus due to otosclerosis, but there is no data available on long-term application and effects on sensorineural hearing loss. In the initial inflammatory phase, corticosteroids or non-steroidal anti-inflammatory drugs may be effective; however, extended systemic application may lead to serious side effects. Vitamin D administration may have effects on the pathological bone loss, as well as on inflammation. No information has been reported on the use of immunosuppressive drugs. Anti-cytokine targeted biological therapy, however, may be feasible. Indeed, one study on the local administration of infliximab has been reported. Potential targets of future therapy may include osteoprotegerin, RANK ligand, cathepsins and also the Wnt-β-catenin pathway. Finally, anti-measles vaccination may delay the progression of the disease and potentially decrease the number of new cases. In conclusion, stapes surgery remains to be widely accepted treatment of conductive hearing loss due to otosclerosis. Due to lack of solid evidence, the place of pharmacological treatment targeting inflammation and bone metabolism needs to be determined by future studies.",
"title": ""
},
{
"docid": "b3a80316fc98ded7c106018afb5acc0a",
"text": "Adaptive antenna array processing is widely known to provide significant anti-interference capabilities within a Global Navigation Satellite Systems (GNSS) receiver. A main challenge in the quest for such receiver architecture has always been the computational/processing requirements. Even more demanding would be to try and incorporate the flexibility of the Software-Defined Radio (SDR) design philosophy in such an implementation. This paper documents a feasible approach to a real-time SDR implementation of a beam-steered GNSS receiver and validates its performance. This research implements a real-time software receiver on a widely-available x86-based multi-core microprocessor to process four-element antenna array data streams sampled with 16-bit resolution. The software receiver is capable of 12 channels all-in-view Controlled Reception Pattern Antenna (CRPA) array processing capable of rejecting multiple interferers. Single Instruction Multiple Data (SIMD) instructions assembly coding and multithreaded programming, the key to such an implementation to reduce computational complexity, are fully documented within the paper. In conventional antenna array systems, receivers use the geometry of antennas and cable lengths known in advance. The documented CRPA implementation is architected to operate without extensive set-up and pre-calibration and leverages Space-Time Adaptive Processing (STAP) to provide adaptation in both the frequency and space domains. The validation component of the paper demonstrates that the developed software receiver operates in real time with live Global Positioning System (GPS) and Wide Area Augmentation System (WAAS) L1 C/A code signal. Further, interference rejection capabilities of the implementation are also demonstrated using multiple synthetic interferers which are added to the live data stream.",
"title": ""
},
{
"docid": "49f132862ca2c4a07d6233e8101a87ff",
"text": "Genetic data as a category of personal data creates a number of challenges to the traditional understanding of personal data and the rules regarding personal data processing. Although the peculiarities of and heightened risks regarding genetic data processing were recognized long before the data protection reform in the EU, the General Data Protection Regulation (GDPR) seems to pay no regard to this. Furthermore, the GDPR will create more legal grounds for (sensitive) personal data (incl. genetic data) processing whilst restricting data subjects’ means of control over their personal data. One of the reasons for this is that, amongst other aims, the personal data reform served to promote big data business in the EU. The substantive clauses of the GDPR concerning big data, however, do not differentiate between the types of personal data being processed. Hence, like all other categories of personal data, genetic data is subject to the big data clauses of the GDPR as well; thus leading to the question whether the GDPR is creating a pathway for ‘big genetic data’. This paper aims to analyse the implications that the role of the GDPR as a big data enabler bears on genetic data processing and the respective rights of the data",
"title": ""
},
{
"docid": "f32187a3253c9327c26f83826e0b03b8",
"text": "Spatiotemporal forecasting has significant implications in sustainability, transportation and health-care domain. Traffic forecasting is one canonical example of such learning task. This task is challenging due to (1) non-linear temporal dynamics with changing road conditions, (2) complex spatial dependencies on road networks topology and (3) inherent difficulty of long-term time series forecasting. To address these challenges, we propose Graph Convolutional Recurrent Neural Network to incorporate both spatial and temporal dependency in traffic flow. We further integrate the encoder-decoder framework and scheduled sampling to improve long-term forecasting. When evaluated on real-world road network traffic data, our approach can accurately capture spatiotemporal correlations and consistently outperforms state-of-the-art baselines by 12% 15%.",
"title": ""
},
{
"docid": "ee7473e3b283790c400f7616392e4c33",
"text": "Evolutionary computation is emerging as a new engineering computational paradigm, which may significantly change the present structural design practice. For this reason, an extensive study of evolutionary computation in the context of structural design has been conducted in the Information Technology and Engineering School at George Mason University and its results are reported here. First, a general introduction to evolutionary computation is presented and recent developments in this field are briefly described. Next, the field of evolutionary design is introduced and its relevance to structural design is explained. Further, the issue of creativity/novelty is discussed and possible ways of achieving it during a structural design process are suggested. Current research progress in building engineering systems’ representations, one of the key issues in evolutionary design, is subsequently discussed. Next, recent developments in constraint-handling methods in evolutionary optimization are reported. Further, the rapidly growing field of evolutionary multiobjective optimization is presented and briefly described. An emerging subfield of coevolutionary design is subsequently introduced and its current advancements reported. Next, a comprehensive review of the applications of evolutionary computation in structural design is provided and chronologically classified. Finally, a summary of the current research status and a discussion on the most promising paths of future research are also presented.",
"title": ""
},
{
"docid": "58b5de13ad6b59269df595693f05168f",
"text": "Convolutional Neural Network (CNN) has gained attractions in image analytics and speech recognition in recent years. However, employing CNN for classification of graphs remains to be challenging. This paper presents the Ngram graph-block based convolutional neural network model for classification of graphs. Our Ngram deep learning framework consists of three novel components. First, we introduce the concept of <inline-formula><tex-math notation=\"LaTeX\">$n$ </tex-math><alternatives><inline-graphic xlink:href=\"luo-ieq1-2720734.gif\"/></alternatives></inline-formula>-gram block to transform each raw graph object into a sequence of <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math> <alternatives><inline-graphic xlink:href=\"luo-ieq2-2720734.gif\"/></alternatives></inline-formula>-gram blocks connected through overlapping regions. Second, we introduce a diagonal convolution step to extract local patterns and connectivity features hidden in these <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math><alternatives> <inline-graphic xlink:href=\"luo-ieq3-2720734.gif\"/></alternatives></inline-formula>-gram blocks by performing <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math><alternatives> <inline-graphic xlink:href=\"luo-ieq4-2720734.gif\"/></alternatives></inline-formula>-gram normalization. Finally, we develop deeper global patterns based on the local patterns and the ways that they respond to overlapping regions by building a <inline-formula><tex-math notation=\"LaTeX\">$n$</tex-math><alternatives> <inline-graphic xlink:href=\"luo-ieq5-2720734.gif\"/></alternatives></inline-formula>-gram deep learning model using convolutional neural network. We evaluate the effectiveness of our approach by comparing it with the existing state of art methods using five real graph repositories from bioinformatics and social networks domains. Our results show that the Ngram approach outperforms existing methods with high accuracy and comparable performance.",
"title": ""
},
{
"docid": "33ae11cfc67a9afe34483444a03bfd5a",
"text": "In today’s interconnected digital world, targeted attacks have become a serious threat to conventional computer systems and critical infrastructure alike. Many researchers contribute to the fight against network intrusions or malicious software by proposing novel detection systems or analysis methods. However, few of these solutions have a particular focus on Advanced Persistent Threats or similarly sophisticated multi-stage attacks. This turns finding domain-appropriate methodologies or developing new approaches into a major research challenge. To overcome these obstacles, we present a structured review of semantics-aware works that have a high potential for contributing to the analysis or detection of targeted attacks. We introduce a detailed literature evaluation schema in addition to a highly granular model for article categorization. Out of 123 identified papers, 60 were found to be relevant in the context of this study. The selected articles are comprehensively reviewed and assessed in accordance to Kitchenham’s guidelines for systematic literature reviews. In conclusion, we combine new insights and the status quo of current research into the concept of an ideal systemic approach capable of semantically processing and evaluating information from different observation points.",
"title": ""
},
{
"docid": "fe09c6160b6d737575a92bac551eef45",
"text": "PURPOSE\nTo define some of the most common characteristics of vascular hyporesponsiveness to catecholamines during septic shock and outline current therapeutic approaches and future perspectives.\n\n\nMETHODS\nSource data were obtained from a PubMed search of the medical literature with the following MeSH terms: Muscle, smooth, vascular/physiopathology; hypotension/etiology; shock/physiopathology; vasodilation/physiology; shock/therapy; vasoconstrictor agents.\n\n\nRESULTS\nNO and peroxynitrite are mainly responsible for vasoplegia and vascular hyporeactivity while COX 2 enzyme is responsible for the increase in PGI2, which also contributes to hyporeactivity. Moreover, K+ATP and BKCa channels are over-activated during septic shock and participate in hypotension. Finally, other mechanisms are involved in vascular hyporesponsiveness such as critical illness-related corticosteroid insufficiency, vasopressin depletion, dysfunction and desensitization of adrenoreceptors as well as inactivation of catecholamines by oxidation.\n\n\nCONCLUSION\nIn animal models, several therapeutic approaches, targeted on one particular compound have proven their efficacy in preventing or reversing vascular hyporesponsiveness to catecholamines. Unfortunately, none have been successfully tested in clinical trials. Nevertheless, very high doses of catecholamines ( > 5 μg/kg/min), hydrocortisone, terlipressin or vasopressin could represent an alternative for the treatment of refractory septic shock.",
"title": ""
}
] | scidocsrr |
f06e068e74c0adee96c1e7ae44770b30 | NBA (network balancing act): a high-performance packet processing framework for heterogeneous processors | [
{
"docid": "b088f6f89facb0139f1e6c299ed2e9a3",
"text": "Scaling the performance of short TCP connections on multicore systems is fundamentally challenging. Although many proposals have attempted to address various shortcomings, inefficiency of the kernel implementation still persists. For example, even state-of-the-art designs spend 70% to 80% of CPU cycles in handling TCP connections in the kernel, leaving only small room for innovation in the user-level program. This work presents mTCP, a high-performance userlevel TCP stack for multicore systems. mTCP addresses the inefficiencies from the ground up—from packet I/O and TCP connection management to the application interface. In addition to adopting well-known techniques, our design (1) translates multiple expensive system calls into a single shared memory reference, (2) allows efficient flowlevel event aggregation, and (3) performs batched packet I/O for high I/O efficiency. Our evaluations on an 8-core machine showed that mTCP improves the performance of small message transactions by a factor of 25 compared to the latest Linux TCP stack and a factor of 3 compared to the best-performing research system known so far. It also improves the performance of various popular applications by 33% to 320% compared to those on the Linux stack.",
"title": ""
}
] | [
{
"docid": "31c48b4aa8402ad6439ec1acd5cbb889",
"text": "Face recognition has been extensively used in a wide variety of security systems for identity authentication for years. However, many security systems are vulnerable to spoofing face attacks (e.g., 2D printed photo, replayed video). Consequently, a number of anti-spoofing approaches have been proposed. In this study, we introduce a new algorithm that addresses the face liveness detection based on the digital focus technique. The proposed algorithm relies on the property of digital focus with various depths of field (DOFs) while shooting. Two features of the blurriness level and the gradient magnitude threshold are computed on the nose and the cheek subimages. The differences of these two features between the nose and the cheek in real face images and spoofing face images are used to facilitate detection. A total of 75 subjects with both real and spoofing face images were used to evaluate the proposed framework. Preliminary experimental results indicated that this new face liveness detection system achieved a high recognition rate of 94.67% and outperformed many state-of-the-art methods. The computation speed of the proposed algorithm was the fastest among the tested methods.",
"title": ""
},
{
"docid": "fae60b86d98a809f876117526106719d",
"text": "Big Data security analysis is commonly used for the analysis of large volume security data from an organisational perspective, requiring powerful IT infrastructure and expensive data analysis tools. Therefore, it can be considered to be inaccessible to the vast majority of desktop users and is difficult to apply to their rapidly growing data sets for security analysis. A number of commercial companies offer a desktop-oriented big data security analysis solution; however, most of them are prohibitive to ordinary desktop users with respect to cost and IT processing power. This paper presents an intuitive and inexpensive big data security analysis approach using Computational Intelligence (CI) techniques for Windows desktop users, where the combination of Windows batch programming, EmEditor and R are used for the security analysis. The simulation is performed on a real dataset with more than 10 million observations, which are collected from Windows Firewall logs to demonstrate how a desktop user can gain insight into their abundant and untouched data and extract useful information to prevent their system from current and future security threats. This CI-based big data security analysis approach can also be extended to other types of security logs such as event logs, application logs and web logs.",
"title": ""
},
{
"docid": "5d827a27d9fb1fe4041e21dde3b8ce44",
"text": "Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership, under rigorous security definitions, and rigorous efficiency requirements of Petabyte scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.",
"title": ""
},
{
"docid": "3744970293b3ed4c4543e6f2313fe2e4",
"text": "With the proliferation of GPS-enabled smart devices and increased availability of wireless network, spatial crowdsourcing (SC) has been recently proposed as a framework to automatically request workers (i.e., smart device carriers) to perform location-sensitive tasks (e.g., taking scenic photos, reporting events). In this paper we study a destination-aware task assignment problem that concerns the optimal strategy of assigning each task to proper worker such that the total number of completed tasks can be maximized whilst all workers can reach their destinations before deadlines after performing assigned tasks. Finding the global optimal assignment turns out to be an intractable problem since it does not imply optimal assignment for individual worker. Observing that the task assignment dependency only exists amongst subsets of workers, we utilize tree-decomposition technique to separate workers into independent clusters and develop an efficient depth-first search algorithm with progressive bounds to prune non-promising assignments. Our empirical studies demonstrate that our proposed technique is quite effective and settle the problem nicely.",
"title": ""
},
{
"docid": "d97af6f656cba4018a5d367861a07f01",
"text": "Traditional Cloud model is not designed to handle latency-sensitive Internet of Things applications. The new trend consists on moving data to be processed close to where it was generated. To this end, Fog Computing paradigm suggests using the compute and storage power of network elements. In such environments, intelligent and scalable orchestration of thousands of heterogeneous devices in complex environments is critical for IoT Service providers. In this vision paper, we present a framework, called Foggy, that facilitates dynamic resource provisioning and automated application deployment in Fog Computing architectures. We analyze several applications and identify their requirements that need to be taken intoconsideration in our design of the Foggy framework. We implemented a proof of concept of a simple IoT application continuous deployment using Raspberry Pi boards.",
"title": ""
},
{
"docid": "33ee29c4ccab435b8b64058b584e13cd",
"text": "In this paper, we present a music recommendation system, which provides a personalized service of music recommendation. The polyphonic music objects of MIDI format are first analyzed for deriving information for music grouping. For this purpose, the representative track of each polyphonic music object is first determined, and then six features are extracted from this track for proper music grouping. Moreover, the user access histories are analyzed to derive the profiles of user interests and behaviors for user grouping. The content-based, collaborative, and statistics-based recommendation methods are proposed based on the favorite degrees of the users to the music groups, and the user groups they belong to. A series of experiments are carried out to show that our approach performs well.",
"title": ""
},
{
"docid": "af1118d8de62821df883250837def5ad",
"text": "Roller compaction is commonly used in the pharmaceutical industry to improve powder flow and compositional uniformity. The process produces ribbons which are milled into granules. The ribbon solid fraction (SF) can affect both the granule size and the tensile strength of downstream tablets. Roll force, which is directly related to the applied stress on the powder in the nip region, is typically the most dominant process parameter controlling the ribbon solid fraction. This work is an extension of a previous study, leveraging mathematical modeling as part of a Quality by Design development strategy (Powder Technology, 2011, 213: 1–13). In this paper, a semi-empirical unified powder compaction model is postulated describing powder solid fraction evolution as a function of applied stress in three geometries: the tapped cylinder (uniaxial strain—part of a standard tapped density measurement), the roller compaction geometry (plane strain deformation), and tablet compression (uniaxial strain). A historical database (CRAVE) containing data from many different formulations was leveraged to evaluate the model. The internally developed CRAVE database contains all aspects of drug product development batch records and was queried to retrieve tablet compression data along with corresponding roller compaction and tap density measurements for the same batch. Tablet compaction data and tap density data were used to calibrate a quadratic relationship between stress and the reciprocal of porosity. The quadratic relationship was used to predict the roll stress and corresponding roll force required to attain the reported ribbon SF. The predicted roll force was found to be consistent with the actual roll force values recorded across 136 different formulations in 136 batch records. In addition, significant correlations were found between the first and the second order constants of the quadratic relationship, suggesting that a single formulation-dependent fitting parameter may be used to define the complete SF versus stress relationship. The fitting parameter could be established by compressing a single tablet and measuring the powder tapped density. It was concluded that characterization of this parameter at a small scale can help define the required process parameters for both roller compactors and tablet presses at a large scale.",
"title": ""
},
{
"docid": "9dfef5bc76b78e7577b9eb377b830a9e",
"text": "Patients with Parkinson's disease may have difficulties in speaking because of the reduced coordination of the muscles that control breathing, phonation, articulation and prosody. Symptoms that may occur because of changes are weakening of the volume of the voice, voice monotony, changes in the quality of the voice, speed of speech, uncontrolled repetition of words. The evaluation of some of the disorders mentioned can be achieved through measuring the variation of parameters in an objective manner. It may be done to evaluate the response to the treatments with intra-daily frequency pre / post-treatment, as well as in the long term. Software systems allow these measurements also by recording the patient's voice. This allows to carry out a large number of tests by means of a larger number of patients and a higher frequency of the measurements. The main goal of our work was to design and realize Voxtester, an effective and simple to use software system useful to measure whether changes in voice emission are sensitive to pharmacologic treatments. Doctors and speech therapists can easily use it without going into the technical details, and we think that this goal is reached only by Voxtester, up to date.",
"title": ""
},
{
"docid": "1c77370d8a69e83f45ddd314b798f1b1",
"text": "The use of networks for communications between the Electronic Control Units (ECU) of a vehicle in production cars dates from the beginning of the 90s. The speci c requirements of the di erent car domains have led to the development of a large number of automotive networks such as LIN, CAN, CAN FD, FlexRay, MOST, automotive Ethernet AVB, etc.. This report rst introduces the context of in-vehicle embedded systems and, in particular, the requirements imposed on the communication systems. Then, a review of the most widely used, as well as the emerging automotive networks is given. Next, the current e orts of the automotive industry on middleware technologies which may be of great help in mastering the heterogeneity, are reviewed, with a special focus on the proposals of the AUTOSAR consortium. Finally, we highlight future trends in the development of automotive communication systems. ∗This technical report is an updated version of two earlier review papers on automotive networks: N. Navet, Y.-Q. Song, F. Simonot-Lion, C. Wilwert, \"Trends in Automotive Communication Systems\", Proceedings of the IEEE, special issue on Industrial Communications Systems, vol 96, no6, pp1204-1223, June 2005 [66]. An updated version of this IEEE Proceedings then appeared as chapter 4 in The Automotive Embedded Systems Handbook in 2008 [62].",
"title": ""
},
{
"docid": "312751cb91bc62e1db0e137f1b0b6748",
"text": "Advertising, long the financial mainstay of the web ecosystem, has become nearly ubiquitous in the world of mobile apps. While ad targeting on the web is fairly well understood, mobile ad targeting is much less studied. In this paper, we use empirical methods to collect a database of over 225,000 ads on 32 simulated devices hosting one of three distinct user profiles. We then analyze how the ads are targeted by correlating ads to potential targeting profiles using Bayes’ rule and Pearson’s chi squared test. This enables us to measure the prevalence of different forms of targeting. We find that nearly all ads show the effects of applicationand time-based targeting, while we are able to identify location-based targeting in 43% of the ads and user-based targeting in 39%.",
"title": ""
},
{
"docid": "263c7309eb803c91ab15af5708cf039c",
"text": "In wave optics, the Wigner distribution and its Fourier dual, the ambiguity function, are important tools in optical system simulation and analysis. The light field fulfills a similar role in the computer graphics community. In this paper, we establish that the light field as it is used in computer graphics is equivalent to a smoothed Wigner distribution and that these are equivalent to the raw Wigner distribution under a geometric optics approximation. Using this insight, we then explore two recent contributions: Fourier slice photography in computer graphics and wavefront coding in optics, and we examine the similarity between explanations of them using Wigner distributions and explanations of them using light fields. Understanding this long-suspected equivalence may lead to additional insights and the productive exchange of ideas between the two fields.",
"title": ""
},
{
"docid": "f122373d44be16dadd479c75cca34a2a",
"text": "This paper presents the design, fabrication, and evaluation of a novel type of valve that uses an electropermanent magnet [1]. This valve is then used to build actuators for a soft robot. The developed EPM valves require only a brief (5 ms) pulse of current to turn flow on or off for an indefinite period of time. EPMvalves are characterized and demonstrated to be well suited for the control of elastomer fluidic actuators. The valves drive the pressurization and depressurization of fluidic channels within soft actuators. Furthermore, the forward locomotion of a soft, multi-actuator rolling robot is driven by EPM valves. The small size and energy-efficiency of EPM valves may make them valuable in soft mobile robot applications.",
"title": ""
},
{
"docid": "75c1fa342d6f30d68b0aba906a54dd69",
"text": "The Constrained Application Protocol (CoAP) is a promising candidate for future smart city applications that run on resource-constrained devices. However, additional security means are mandatory to cope with the high security requirements of smart city applications. We present a framework to evaluate lightweight intrusion detection techniques for CoAP applications. This framework combines an OMNeT++ simulation with C/C++ application code that also runs on real hardware. As the result of our work, we used our framework to evaluate intrusion detection techniques for a smart public transport application that uses CoAP. Our first evaluations indicate that a hybrid IDS approach is a favorable choice for smart city applications.",
"title": ""
},
{
"docid": "b5fe13becf36cdc699a083b732dc5d6a",
"text": "The stability of two-dimensional, linear, discrete systems is examined using the 2-D matrix Lyapunov equation. While the existence of a positive definite solution pair to the 2-D Lyapunov equation is sufficient for stability, the paper proves that such existence is not necessary for stability, disproving a long-standing conjecture.",
"title": ""
},
{
"docid": "29fa75e49d4179072ec25b8aab6b48e2",
"text": "We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituentand dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.",
"title": ""
},
{
"docid": "497d72ce075f9bbcb2464c9ab20e28de",
"text": "Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon.",
"title": ""
},
{
"docid": "b7668f382f1857ff034d8088328f866d",
"text": "Diverse lines of evidence point to a basic human aversion to physically harming others. First, we demonstrate that unwillingness to endorse harm in a moral dilemma is predicted by individual differences in aversive reactivity, as indexed by peripheral vasoconstriction. Next, we tested the specific factors that elicit the aversive response to harm. Participants performed actions such as discharging a fake gun into the face of the experimenter, fully informed that the actions were pretend and harmless. These simulated harmful actions increased peripheral vasoconstriction significantly more than did witnessing pretend harmful actions or to performing metabolically matched nonharmful actions. This suggests that the aversion to harmful actions extends beyond empathic concern for victim harm. Together, these studies demonstrate a link between the body and moral decision-making processes.",
"title": ""
},
{
"docid": "f1e646a0627a5c61a0f73a41d35ccac7",
"text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.",
"title": ""
},
{
"docid": "dc1bd4603d9673fb4cd0fd9d7b0b6952",
"text": "We investigate the contribution of option markets to price discovery, using a modification of Hasbrouck’s (1995) “information share” approach. Based on five years of stock and options data for 60 firms, we estimate the option market’s contribution to price discovery to be about 17 percent on average. Option market price discovery is related to trading volume and spreads in both markets, and stock volatility. Price discovery across option strike prices is related to leverage, trading volume, and spreads. Our results are consistent with theoretical arguments that informed investors trade in both stock and option markets, suggesting an important informational role for options. ∗Chakravarty is from Purdue University; Gulen is from the Pamplin College of Business, Virginia Tech; and Mayhew is from the Terry College of Business, University of Georgia and the U.S. Securities and Exchange Commission. We would like to thank the Institute for Quantitative Research in Finance (the Q-Group) for funding this research. Gulen acknowledges funding from a Virginia Tech summer grant and Mayhew acknowledges funding from the TerrySanford Research Grant at the Terry College of Business and from the University of Georgia Research Foundation. We would like to thank the editor, Rick Green; Michael Cliff; Joel Hasbrouck; Raman Kumar; an anonymous referee; and seminar participants at Purdue University, the University of Georgia, Texas Christian University, the University of South Carolina, the Securities and Exchange Commission, the University of Delaware, George Washington University, the Commodity Futures Trading Commission, the Batten Conference at the College of William and Mary, the 2002 Q-Group Conference, and the 2003 INQUIRE conference. The U.S. Securities and Exchange Commission disclaims responsibility for any private publication or statement of any SEC employee or Commissioner. This study expresses the author’s views and does not necessarily reflect those of the Commission, the Commissioners, or other members of the staff.",
"title": ""
},
{
"docid": "0d93bf1b3b891a625daa987652ca1964",
"text": "In this paper, we show that a continuous spectrum of randomis ation exists, in which most existing tree randomisations are only operating around the tw o ends of the spectrum. That leaves a huge part of the spectrum largely unexplored. We propose a ba se le rner VR-Tree which generates trees with variable-randomness. VR-Trees are able to span f rom the conventional deterministic trees to the complete-random trees using a probabilistic pa rameter. Using VR-Trees as the base models, we explore the entire spectrum of randomised ensemb les, together with Bagging and Random Subspace. We discover that the two halves of the spectrum have their distinct characteristics; and the understanding of which allows us to propose a new appr o ch in building better decision tree ensembles. We name this approach Coalescence, which co ales es a number of points in the random-half of the spectrum. Coalescence acts as a committe e of “ xperts” to cater for unforeseeable conditions presented in training data. Coalescence is found to perform better than any single operating point in the spectrum, without the need to tune to a specific level of randomness. In our empirical study, Coalescence ranks top among the benchm arking ensemble methods including Random Forests, Random Subspace and C5 Boosting; and only Co alescence is significantly better than Bagging and Max-Diverse Ensemble among all the methods in the comparison. Although Coalescence is not significantly better than Random Forests , we have identified conditions under which one will perform better than the other.",
"title": ""
}
] | scidocsrr |
c76246a3a23e9bed92c92fd984bd2c88 | Race directed random testing of concurrent programs | [
{
"docid": "cb1952a4931955856c6479d7054c57e7",
"text": "This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes thatrequire client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating races conditions. On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs.",
"title": ""
}
] | [
{
"docid": "b7d585ffa334a5c0f88575e42a8682c4",
"text": "Detecting impending failure of hard disks is an important prediction task which might help computer systems to prevent loss of data and performance degradation. Currently most of the hard drive vendors support self-monitoring, analysis and reporting technology (SMART) which are often considered unreliable for such tasks. The problem of finding alternatives to SMART for predicting disk failure is an area of active research. In this paper, we consider events recorded from live disks and show that it is possible to construct decision support systems which can detect such failures. It is desired that any such prediction methodology should have high accuracy and ease of interpretability. Black box models can deliver highly accurate solutions but do not provide an understanding of events which explains the decision given by it. To this end we explore rule based classifiers for predicting hard disk failures from various disk events. We show that it is possible to learn easy to understand rules, from disk events, which have extremely low false alarm rates on real world data.",
"title": ""
},
{
"docid": "ce3d82fc815a965a66be18d20434e80f",
"text": "In this paper the three-phase grid connected inverter has been investigated. The inverter’s control strategy is based on the adaptive hysteresis current controller. Inverter connects the DG (distributed generation) source to the grid. The main advantages of this method are constant switching frequency, better current control, easy filter design and less THD (total harmonic distortion). Since a constant and ripple free dc bus voltage is not ensured at the output of alternate energy sources, the main aim of the proposed algorithm is to make the output of the inverter immune to the fluctuations in the dc input voltage This inverter can be used to connect the medium and small-scale wind turbines and solar cells to the grid and compensate local load reactive power. Reactive power compensating improves SUF (system usage factor) from nearly 20% (in photovoltaic systems) to 100%. The simulation results confirm that switching frequency is constant and THD of injected current is low.",
"title": ""
},
{
"docid": "f0958d2c952c7140c998fa13a2bf4374",
"text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.",
"title": ""
},
{
"docid": "1167ef2f839531bcaca3fae3cd25cf55",
"text": "Finger impairment following stroke results in significant deficits in hand manipulation and the performance of everyday tasks. While recent advances in rehabilitation robotics have shown promise for facilitating functional improvement, it remains unclear how best to employ these devices to maximize benefits. Current devices for the hand, however, lack the capacity to fully explore the space of possible training paradigms. Particularly, they cannot provide the independent joint control and levels of velocity and torque required. To fill this need, we have developed a prototype for one digit, the cable actuated finger exoskeleton (CAFE), a three-degree-of-freedom robotic exoskeleton for the index finger. This paper presents the design and development of the CAFE, with performance testing results.",
"title": ""
},
{
"docid": "a71d142df039c6a361e60ec1342a3980",
"text": "Intelligent transportation systems (ITS) rely on connected vehicle applications to address real-world problems. Research is currently being conducted to support safety, mobility and environmental applications. This paper presents the DrivingStyles architecture, which adopts data mining techniques and neural networks to analyze and generate a classification of driving styles and fuel consumption based on driver characterization. In particular, we have implemented an algorithm that is able to characterize the degree of aggressiveness of each driver. We have also developed a methodology to calculate, in real-time, the consumption and environmental impact of spark ignition and diesel vehicles from a set of variables obtained from the vehicle’s electronic control unit (ECU). In this paper, we demonstrate the impact of the driving style on fuel consumption, as well as its correlation with the greenhouse gas emissions generated by each vehicle. Overall, our platform is able to assist drivers in correcting their bad driving habits, while offering helpful tips to improve fuel economy and driving safety.",
"title": ""
},
{
"docid": "276ccf2a41d91739f5d0bd884abdedbd",
"text": "Evidence of the effects of playing violent video games on subsequent aggression has been mixed. This study examined how playing a violent video game affected levels of aggression displayed in a laboratory. A total of 43 undergraduate students (22 men and 21 women) were randomly assigned to play either a violent (Mortal Kombat) or nonviolent (PGA Tournament Golf) video game for 10 min. Then they competed with a confederate in a reaction time task that allowed for provocation and retaliation. Punishment levels set by participants for their opponents served as the measure of aggression. The results confirmed our hypothesis that playing the violent game would result in more aggression than would playing the nonviolent game. In addition, a Game Sex interaction showed that ssed",
"title": ""
},
{
"docid": "4d18ea8816e9e4abf428b3f413c82f9e",
"text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.",
"title": ""
},
{
"docid": "6f2162f883fce56eaa6bd8d0fbcedc0b",
"text": "While data from Massive Open Online Courses (MOOCs) offers the potential to gain new insights into the ways in which online communities can contribute to student learning, much of the richness of the data trace is still yet to be mined. In particular, very little work has attempted fine-grained content analyses of the student interactions in MOOCs. Survey research indicates the importance of student goals and intentions in keeping them involved in a MOOC over time. Automated fine-grained content analyses offer the potential to detect and monitor evidence of student engagement and how it relates to other aspects of their behavior. Ultimately these indicators reflect their commitment to remaining in the course. As a methodological contribution, in this paper we investigate using computational linguistic models to measure learner motivation and cognitive engagement from the text of forum posts. We validate our techniques using survival models that evaluate the predictive validity of these variables in connection with attrition over time. We conduct this evaluation in three MOOCs focusing on very different types of learning materials. Prior work demonstrates that participation in the discussion forums at all is a strong indicator of student commitment. Our methodology allows us to differentiate better among these students, and to identify danger signs that a struggling student is in need of support within a population whose interaction with the course offers the opportunity for effective support to be administered. Theoretical and practical implications will be discussed.",
"title": ""
},
{
"docid": "c6a23113b0e88c884eaddfba9cce2667",
"text": "Recent research in machine learning has focused on breaking audio spectrograms into separate sources of sound using latent variable decompositions. These methods require that the number of sources be specified in advance, which is not always possible. To address this problem, we develop Gamma Process Nonnegative Matrix Factorization (GaP-NMF), a Bayesian nonparametric approach to decomposing spectrograms. The assumptions behind GaP-NMF are based on research in signal processing regarding the expected distributions of spectrogram data, and GaP-NMF automatically discovers the number of latent sources. We derive a mean-field variational inference algorithm and evaluate GaP-NMF on both synthetic data and recorded music.",
"title": ""
},
{
"docid": "27775805c45a82cbd31fd9a5e93f3df1",
"text": "In a dynamic world, mechanisms allowing prediction of future situations can provide a selective advantage. We suggest that memory systems differ in the degree of flexibility they offer for anticipatory behavior and put forward a corresponding taxonomy of prospection. The adaptive advantage of any memory system can only lie in what it contributes for future survival. The most flexible is episodic memory, which we suggest is part of a more general faculty of mental time travel that allows us not only to go back in time, but also to foresee, plan, and shape virtually any specific future event. We review comparative studies and find that, in spite of increased research in the area, there is as yet no convincing evidence for mental time travel in nonhuman animals. We submit that mental time travel is not an encapsulated cognitive system, but instead comprises several subsidiary mechanisms. A theater metaphor serves as an analogy for the kind of mechanisms required for effective mental time travel. We propose that future research should consider these mechanisms in addition to direct evidence of future-directed action. We maintain that the emergence of mental time travel in evolution was a crucial step towards our current success.",
"title": ""
},
{
"docid": "e3ca898c936009e149d5639a6e72359e",
"text": "Tracking bits through block ciphers and optimizing attacks at hand is one of the tedious task symmetric cryptanalysts have to deal with. It would be nice if a program will automatically handle them at least for well-known attack techniques, so that cryptanalysts will only focus on nding new attacks. However, current automatic tools cannot be used as is, either because they are tailored for speci c ciphers or because they only recover a speci c part of the attacks and cryptographers are still needed to nalize the analysis. In this paper we describe a generic algorithm exhausting the best meetin-the-middle and impossible di erential attacks on a very large class of block ciphers from byte to bit-oriented, SPN, Feistel and Lai-Massey block ciphers. Contrary to previous tools that target to nd the best di erential / linear paths in the cipher and leave the cryptanalysts to nd the attack using these paths, we automatically nd the best attacks by considering the cipher and the key schedule algorithms. The building blocks of our algorithm led to two algorithms designed to nd the best simple meet-in-the-middle attacks and the best impossible truncated differential attacks respectively. We recover and improve many attacks on AES, mCRYPTON, SIMON, IDEA, KTANTAN, PRINCE and ZORRO. We show that this tool can be used by designers to improve their analysis.",
"title": ""
},
{
"docid": "fe4c9336db84d7303280b87485f4262f",
"text": "The mechanistic target of rapamycin (mTOR) coordinates eukaryotic cell growth and metabolism with environmental inputs, including nutrients and growth factors. Extensive research over the past two decades has established a central role for mTOR in regulating many fundamental cell processes, from protein synthesis to autophagy, and deregulated mTOR signaling is implicated in the progression of cancer and diabetes, as well as the aging process. Here, we review recent advances in our understanding of mTOR function, regulation, and importance in mammalian physiology. We also highlight how the mTOR signaling network contributes to human disease and discuss the current and future prospects for therapeutically targeting mTOR in the clinic.",
"title": ""
},
{
"docid": "1e5956b0d9d053cd20aad8b53730c969",
"text": "The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as \"the fog\". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.",
"title": ""
},
{
"docid": "56beaf6067d944fc17fe282155c303a0",
"text": "The femur is the enlarged and the vigorous bone in the human body, ranging from the hip to the knee. This bone is responsible for the creation of Red Blood Cell in the body. Since this bone is a major part of the body, a method is proposed through this paper to visualize and classify deformities for locating fractures in the femur through image processing techniques. The input image is preprocessed to highlight the domain of interest. In the process, the foreground which is the major domain of interest is figured out by suppressing the background details. The mathematical morphological techniques are used for these operations. With the help of basic morphological operations, the foreground is highlighted and edge detection is used to highlight the objects in the foreground. The processed image is classified using the support vector machine (SVM) to distinguish fractured and unfractured sides of the bone.",
"title": ""
},
{
"docid": "93363db7856de156d314adb747db5c63",
"text": "This paper presents a detailed analysis about the power losses and efficiency of multilevel dc-dc converters. The analysis considers different loss mechanisms and gives out quantitative descriptions of the power losses and useful design criteria. The analysis is based on a three-level multilevel dc-dc converter and can be extended to other switched-capacitor converters. The comparison between the theoretical analysis and the experimental results are shown to substantiate the theory",
"title": ""
},
{
"docid": "e3f392ea43d435e08dc8996902fb6349",
"text": "In nanopore sequencing devices, electrolytic current signals are sensitive to base modifications, such as 5-methylcytosine (5-mC). Here we quantified the strength of this effect for the Oxford Nanopore Technologies MinION sequencer. By using synthetically methylated DNA, we were able to train a hidden Markov model to distinguish 5-mC from unmethylated cytosine. We applied our method to sequence the methylome of human DNA, without requiring special steps for library preparation.",
"title": ""
},
{
"docid": "5f414e1f03aa2a9c54fc98b05ca65cdb",
"text": "Power MOSFETs have become the standard choice as the main switching device for low-voltage (<200 V) switchmode power-supply (SMPS) converter applications. However using manufacturers’ datasheets to choose or size the correct device for a specific circuit topology is becoming increasingly difficult. The main criteria for MOSFET selection are the power loss associated with the MOSFET (related to the overall efficiency of the SMPS) and the power-dissipation capability of the MOSFET (related to the maximum junction temperature and thermal performance of the package). This application note focuses on the basic characteristics and understanding of the MOSFET.",
"title": ""
},
{
"docid": "160a866ca769a847138c5afc7f34db38",
"text": "STUDY OBJECTIVE\nThe purpose of this article is to review the published literature and perform a systematic review to evaluate the effectiveness and feasibility of the use of a hysteroscope for vaginoscopy or hysteroscopy in diagnosing and establishing therapeutic management of adolescent patients with gynecologic problems.\n\n\nDESIGN\nA systematic review.\n\n\nSETTING\nPubMed, Web of science, and Scopus searches were performed for the period up to September 2013 to identify all the eligible studies. Additional relevant articles were identified using citations within these publications.\n\n\nPARTICIPANTS\nFemale adolescents aged 10 to 18 years.\n\n\nRESULTS\nA total of 19 studies were included in the systematic review. We identified 19 case reports that described the application of a hysteroscope as treatment modality for some gynecologic conditions or diseases in adolescents. No original study was found matching the age of this specific population.\n\n\nCONCLUSIONS\nA hysteroscope is a useful substitute for vaginoscopy or hysteroscopy for the exploration of the immature genital tract and may help in the diagnosis and treatment of gynecologic disorders in adolescent patients with an intact hymen, limited vaginal access, or a narrow vagina.",
"title": ""
},
{
"docid": "570eca9884edb7e4a03ed95763be20aa",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "5cc666e8390b0d3cefaee2d55ad7ee38",
"text": "The thermal environment surrounding preterm neonates in closed incubators is regulated via air temperature control mode. At present, these control modes do not take account of all the thermal parameters involved in a pattern of incubator such as the thermal parameters of preterm neonates (birth weight < 1000 grams). The objective of this work is to design and validate a generalized predictive control (GPC) that takes into account the closed incubator model as well as the newborn premature model. Then, we implemented this control law on a DRAGER neonatal incubator with and without newborn using microcontroller card. Methods: The design of the predictive control law is based on a prediction model. The developed model allows us to take into account all the thermal exchanges (radioactive, conductive, convective and evaporative) and the various interactions between the environment of the incubator and the premature newborn. Results: The predictive control law and the simulation model developed in Matlab/Simulink environment make it possible to evaluate the quality of the mode of control of the air temperature to which newborn must be raised. The results of the simulation and implementation of the air temperature inside the incubator (with newborn and without newborn) prove the feasibility and effectiveness of the proposed GPC controller compared with a proportional–integral–derivative controller (PID controller). Keywords—Incubator; neonatal; model; temperature; Arduino; GPC",
"title": ""
}
] | scidocsrr |
c6800000b91876cb175b1475a62c6584 | A Production Oriented Approach for Vandalism Detection in Wikidata - The Buffaloberry Vandalism Detector at WSDM Cup 2017 | [
{
"docid": "40da1f85f7bdc84537a608ce6bec0e17",
"text": "This paper reports on the PAN 2014 evaluation lab which hosts three shared tasks on plagiarism detection, author identification, and author profiling. To improve the reproducibility of shared tasks in general, and PAN’s tasks in particular, the Webis group developed a new web service called TIRA, which facilitates software submissions. Unlike many other labs, PAN asks participants to submit running softwares instead of their run output. To deal with the organizational overhead involved in handling software submissions, the TIRA experimentation platform helps to significantly reduce the workload for both participants and organizers, whereas the submitted softwares are kept in a running state. This year, we addressed the matter of responsibility of successful execution of submitted softwares in order to put participants back in charge of executing their software at our site. In sum, 57 softwares have been submitted to our lab; together with the 58 software submissions of last year, this forms the largest collection of softwares for our three tasks to date, all of which are readily available for further analysis. The report concludes with a brief summary of each task.",
"title": ""
}
] | [
{
"docid": "54c2914107ae5df0a825323211138eb9",
"text": "An implicit, but pervasive view in the information science community is that people are perpetual seekers after new public information, incessantly identifying and consuming new information by browsing the Web and accessing public collections. One aim of this review is to move beyond this consumer characterization, which regards information as a public resource containing novel data that we seek out, consume, and then discard. Instead, I want to focus on a very different view: where familiar information is used as a personal resource that we keep, manage, and (sometimes repeatedly) exploit. I call this information curation. I first summarize limitations of the consumer perspective. I then review research on three different information curation processes: keeping, management, and exploitation. I describe existing work detailing how each of these processes is applied to different types of personal data: documents, e-mail messages, photos, and Web pages. The research indicates people tend to keep too much information, with the exception of contacts and Web pages. When managing information, strategies that rely on piles as opposed to files provide surprising benefits. And in spite of the emergence of desktop search, exploitation currently remains reliant on manual methods such as navigation. Several new technologies have the potential to address important",
"title": ""
},
{
"docid": "88602ba9bcb297af04e58ed478664ee5",
"text": "Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has attracted more and more attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, i.e., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not converted to AD within 18 months), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs), automatically labeled by an atlas warping algorithm. For CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using a 10-fold cross-validation. As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality of biomarkers. Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% even using the best individual modality of biomarkers. Further analysis on MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to the case of using an individual modality of biomarkers.",
"title": ""
},
{
"docid": "59101ef7f0d3fe1976c4abd364400bc5",
"text": "Although conventional Yagi antenna has the advantage of unidirectional radiation patterns, it is not suitable for wideband applications due to its drawback of narrow bandwidth. In this communication, a compact wideband planar printed quasi-Yagi antenna is presented. The proposed quasi-Yagi antenna consists of a microstrip line to slotline transition structure, a driver dipole, and a parasitic strip element. The driver dipole is connected to the slotline through a coplanar stripline (CPS). The proposed antenna uses a stepped connection structure between the CPS and the slotline to improve the impedance matching. Two apertures are symmetrically etched in the ground plane to improve the unidirectional radiation characteristics. Simulation and experimental results show that the unidirectional radiation patterns of the proposed antenna are good. A 92.2% measured bandwidth with from 3.8 to 10.3 GHz is achieved. A moderate gain, which is better than 4 dBi, is also obtained.",
"title": ""
},
{
"docid": "222ab6804b3fe15fe23b27bc7f5ede5f",
"text": "Single-image super-resolution (SR) reconstruction via sparse representation has recently attracted broad interest. It is known that a low-resolution (LR) image is susceptible to noise or blur due to the degradation of the observed image, which would lead to a poor SR performance. In this paper, we propose a novel robust edge-preserving smoothing SR (REPS-SR) method in the framework of sparse representation. An EPS regularization term is designed based on gradient-domain-guided filtering to preserve image edges and reduce noise in the reconstructed image. Furthermore, a smoothing-aware factor adaptively determined by the estimation of the noise level of LR images without manual interference is presented to obtain an optimal balance between the data fidelity term and the proposed EPS regularization term. An iterative shrinkage algorithm is used to obtain the SR image results for LR images. The proposed adaptive smoothing-aware scheme makes our method robust to different levels of noise. Experimental results indicate that the proposed method can preserve image edges and reduce noise and outperforms the current state-of-the-art methods for noisy images.",
"title": ""
},
{
"docid": "0b1baa3190abb39284f33b8e73bcad1d",
"text": "Despite significant advances in machine learning and perception over the past few decades, perception algorithms can still be unreliable when deployed in challenging time-varying environments. When these systems are used for autonomous decision-making, such as in self-driving vehicles, the impact of their mistakes can be catastrophic. As such, it is important to characterize the performance of the system and predict when and where it may fail in order to take appropriate action. While similar in spirit to the idea of introspection, this work introduces a new paradigm for predicting the likely performance of a robot’s perception system based on past experience in the same workspace. In particular, we propose two models that probabilistically predict perception performance from observations gathered over time. While both approaches are place-specific, the second approach additionally considers appearance similarity when incorporating past observations. We evaluate our method in a classical decision-making scenario in which the robot must choose when and where to drive autonomously in 60 km of driving data from an urban environment. Results demonstrate that both approaches lead to fewer false decisions (in terms of incorrectly offering or denying autonomy) for two different detector models, and show that leveraging visual appearance within a state-of-the-art navigation framework increases the accuracy of our performance predictions.",
"title": ""
},
{
"docid": "ae7405600f7cf3c7654cc2db73a22340",
"text": "The usual approach for automatic summarization is sentence extraction, where key sentences from the input documents are selected based on a suite of features. While word frequency often is used as a feature in summarization, its impact on system performance has not been isolated. In this paper, we study the contribution to summarization of three factors related to frequency: content word frequency, composition functions for estimating sentence importance from word frequency, and adjustment of frequency weights based on context. We carry out our analysis using datasets from the Document Understanding Conferences, studying not only the impact of these features on automatic summarizers, but also their role in human summarization. Our research shows that a frequency based summarizer can achieve performance comparable to that of state-of-the-art systems, but only with a good composition function; context sensitivity improves performance and significantly reduces repetition.",
"title": ""
},
{
"docid": "35625f248c81ebb5c20151147483f3f6",
"text": "A very simple way to improve the performance of almost any mac hine learning algorithm is to train many different models on the same data a nd then to average their predictions [3]. Unfortunately, making predictions u ing a whole ensemble of models is cumbersome and may be too computationally expen sive to allow deployment to a large number of users, especially if the indivi dual models are large neural nets. Caruana and his collaborators [1] have shown th at it is possible to compress the knowledge in an ensemble into a single model whi ch is much easier to deploy and we develop this approach further using a dif ferent compression technique. We achieve some surprising results on MNIST and w e show that we can significantly improve the acoustic model of a heavily use d commercial system by distilling the knowledge in an ensemble of models into a si ngle model. We also introduce a new type of ensemble composed of one or more full m odels and many specialist models which learn to distinguish fine-grained c lasses that the full models confuse. Unlike a mixture of experts, these specialist m odels can be trained rapidly and in parallel.",
"title": ""
},
{
"docid": "e1b6de27518c1c17965a891a8d14a1e1",
"text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.",
"title": ""
},
{
"docid": "d417b73715337b661c940b370a96fc7b",
"text": "In this paper we introduce a new decentralized digital currency, called NRGcoin. Prosumers in the smart grid trade locally produced renewable energy using NRGcoins, the value of which is determined on an open currency exchange market. Similar to Bitcoins, this currency offers numerous advantages over fiat currency, but unlike Bitcoins it is generated by injecting energy into the grid, rather than spending energy on computational power. In addition, we propose a novel trading paradigm for buying and selling green energy in the smart grid. Our mechanism achieves demand response by providing incentives to prosumers to balance their production and consumption out of their own self-interest. We study the advantages of our proposed currency over traditional money and environmental instruments, and explore its benefits for all parties in the smart grid.",
"title": ""
},
{
"docid": "fd91f09861da433d27d4db3f7d2a38a6",
"text": "Herbert Simon’s research endeavor aimed to understand the processes that participate in human decision making. However, despite his effort to investigate this question, his work did not have the impact in the “decision making” community that it had in other fields. His rejection of the assumption of perfect rationality, made in mainstream economics, led him to develop the concept of bounded rationality. Simon’s approach also emphasized the limitations of the cognitive system, the change of processes due to expertise, and the direct empirical study of cognitive processes involved in decision making. In this article, we argue that his subsequent research program in problem solving and expertise offered critical tools for studying decision-making processes that took into account his original notion of bounded rationality. Unfortunately, these tools were ignored by the main research paradigms in decision making, such as Tversky and Kahneman’s biased rationality approach (also known as the heuristics and biases approach) and the ecological approach advanced by Gigerenzer and others. We make a proposal of how to integrate Simon’s approach with the main current approaches to decision making. We argue that this would lead to better models of decision making that are more generalizable, have higher ecological validity, include specification of cognitive processes, and provide a better understanding of the interaction between the characteristics of the cognitive system and the contingencies of the environment.",
"title": ""
},
{
"docid": "3d3589a002f8195bb20324dd8a8f5d76",
"text": "Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points) Dex-Net 3.0 achieves success rates of 98%, 82%, and 58% respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at http://berkeleyautomation.github.io/dex-net.",
"title": ""
},
{
"docid": "9d319b7bfdf43b05aa79f67c990ccb73",
"text": "Queries are the foundations of data intensive applications. In model-driven software engineering (MDE), model queries are core technologies of tools and transformations. As software models are rapidly increasing in size and complexity, traditional tools exhibit scalability issues that decrease productivity and increase costs [17]. While scalability is a hot topic in the database community and recent NoSQL efforts have partially addressed many shortcomings, this happened at the cost of sacrificing the ad-hoc query capabilities of SQL. Unfortunately, this is a critical problem for MDE applications due to their inherent workload complexity. In this paper, we aim to address both the scalability and ad-hoc querying challenges by adapting incremental graph search techniques – known from the EMF-IncQuery framework – to a distributed cloud infrastructure. We propose a novel architecture for distributed and incremental queries, and conduct experiments to demonstrate that IncQuery-D, our prototype system, can scale up from a single workstation to a cluster that can handle very large models and complex incremental queries efficiently.",
"title": ""
},
{
"docid": "1364388181335859cabcdcecf73038e8",
"text": "In this paper, we propose an image completion algorithm based on dense correspondence between the input image and an exemplar image retrieved from the Internet. Contrary to traditional methods which register two images according to sparse correspondence, in this paper, we propose a hierarchical PatchMatch method that progressively estimates a dense correspondence, which is able to capture small deformations between images. The estimated dense correspondence has usually large occlusion areas that correspond to the regions to be completed. A nearest neighbor field (NNF) interpolation algorithm interpolates a smooth and accurate NNF over the occluded region. Given the calculated NNF, the correct image content from the exemplar image is transferred to the input image. Finally, as there could be a color difference between the completed content and the input image, a color correction algorithm is applied to remove the visual artifacts. Numerical results show that our proposed image completion method can achieve photo realistic image completion results.",
"title": ""
},
{
"docid": "a0124ccd8586bd082ef4510389269d5d",
"text": "We present a convolutional-neural-network-based system that faithfully colorizes black and white photographic images without direct human assistance. We explore various network architectures, objectives, color spaces, and problem formulations. The final classification-based model we build generates colorized images that are significantly more aesthetically-pleasing than those created by the baseline regression-based model, demonstrating the viability of our methodology and revealing promising avenues for future work.",
"title": ""
},
{
"docid": "c20b774b1e2422cadaf41e60652f7363",
"text": "In some situations, utilities may try to “save” the fuse of a circuit following temporary faults by de-energizing the line with the fast operation of an upstream recloser before the fuse is damaged. This fuse-saving practice is accomplished through proper time coordination between a recloser and a fuse. However, the installation of distributed generation (DG) into distribution networks may affect this coordination due to additional fault current contributions from the distributed resources. This phenomenon of recloser-fuse miscoordination is investigated in this paper with the help of a typical network that employs fuse saving. The limitations of a recloser equipped with time and instantaneous overcurrent elements with respect to fuse savings, in the presence of DG, are discussed. An adaptive relaying strategy is proposed to ensure fuse savings in the new scenario even in the worst fault conditions. The simulation results obtained by adaptively changing relay settings in response to changing DG configurations confirm that the settings selected theoretically in accordance with the proposed strategy hold well in operation.",
"title": ""
},
{
"docid": "83e53a09792e434db2bb5bef32c7bf61",
"text": "Extractive document summarization aims to conclude given documents by extracting some salient sentences. Often, it faces two challenges: 1) how to model the information redundancy among candidate sentences; 2) how to select the most appropriate sentences. This paper attempts to build a strong summarizer DivSelect+CNNLM by presenting new algorithms to optimize each of them. Concretely, it proposes CNNLM, a novel neural network language model (NNLM) based on convolutional neural network (CNN), to project sentences into dense distributed representations, then models sentence redundancy by cosine similarity. Afterwards, it formulates the selection process as an optimization problem, constructing a diversified selection process (DivSelect) with the aim of selecting some sentences which have high prestige, meantime, are dis-similar with each other. Experimental results on DUC2002 and DUC2004 benchmark data sets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
},
{
"docid": "42fa2e99d0c17cf706e6674dafb898a7",
"text": "To improve software productivity, when constructing new software systems, developers often reuse existing class libraries or frameworks by invoking their APIs. Those APIs, however, are often complex and not well documented, posing barriers for developers to use them in new client code. To get familiar with how those APIs are used, developers may search the Web using a general search engine to find relevant documents or code examples. Developers can also use a source code search engine to search open source repositories for source files that use the same APIs. Nevertheless, the number of returned source files is often large. It is difficult for developers to learn API usages from a large number of returned results. In order to help developers understand API usages and write API client code more effectively, we have developed an API usage mining framework and its supporting tool called MAPO (for <u>M</u>ining <u>AP</u>I usages from <u>O</u>pen source repositories). Given a query that describes a method, class, or package for an API, MAPO leverages the existing source code search engines to gather relevant source files and conducts data mining. The mining leads to a short list of frequent API usages for developers to inspect. MAPO currently consists of five components: a code search engine, a source code analyzer, a sequence preprocessor, a frequent sequence miner, and a frequent sequence post processor. We have examined the effectiveness of MAPO using a set of various queries. The preliminary results show that the framework is practical for providing informative and succinct API usage patterns.",
"title": ""
},
{
"docid": "188e971e34192af93c36127b69d89064",
"text": "1 1 This paper has been revised and extended from the authors' previous work [23][24][25]. ABSTRACT Ontology mapping seeks to find semantic correspondences between similar elements of different ontologies. It is a key challenge to achieve semantic interoperability in building the Semantic Web. This paper proposes a new generic and adaptive ontology mapping approach, called the PRIOR+, based on propagation theory, information retrieval techniques and artificial intelligence. The approach consists of three major modules, i.e., the IR-based similarity generator, the adaptive similarity filter and weighted similarity aggregator, and the neural network based constraint satisfaction solver. The approach first measures both linguistic and structural similarity of ontologies in a vector space model, and then aggregates them using an adaptive method based on their harmonies, which is defined as an estimator of performance of similarity. Finally to improve mapping accuracy the interactive activation and competition neural network is activated, if necessary, to search for a solution that can satisfy ontology constraints. The experimental results show that harmony is a good estimator of f-measure; the harmony based adaptive aggregation outperforms other aggregation methods; neural network approach significantly boosts the performance in most cases. Our approach is competitive with top ranked systems on benchmark tests at OAEI campaign 2007, and performs the best on real cases in OAEI benchmark tests.",
"title": ""
},
{
"docid": "e41e5221116a7b63c2238fc4541c1d4c",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii CHAPTER",
"title": ""
}
] | scidocsrr |
290b378cdbef5fbc22e940d194cb0784 | Superionic glass-ceramic electrolytes for room-temperature rechargeable sodium batteries. | [
{
"docid": "ea96aa3b9f162c69c738be2b190db9e0",
"text": "Batteries are currently being developed to power an increasingly diverse range of applications, from cars to microchips. How can scientists achieve the performance that each application demands? How will batteries be able to power the many other portable devices that will no doubt be developed in the coming years? And how can batteries become a sustainable technology for the future? The technological revolution of the past few centuries has been fuelled mainly by variations of the combustion reaction, the fire that marked the dawn of humanity. But this has come at a price: the resulting emissions of carbon dioxide have driven global climate change. For the sake of future generations, we urgently need to reconsider how we use energy in everything from barbecues to jet aeroplanes and power stations. If a new energy economy is to emerge, it must be based on a cheap and sustainable energy supply. One of the most flagrantly wasteful activities is travel, and here battery devices can potentially provide a solution, especially as they can be used to store energy from sustainable sources such as the wind and solar power. Because batteries are inherently simple in concept, it is surprising that their development has progressed much more slowly than other areas of electronics. As a result, they are often seen as being the heaviest, costliest and least-green components of any electronic device. It was the lack of good batteries that slowed down the deployment of electric cars and wireless communication, which date from at least 1899 and 1920, respectively (Fig. 1). The slow progress is due to the lack of suitable electrode materials and electrolytes, together with difficulties in mastering the interfaces between them. All batteries are composed of two electrodes connected by an ionically conductive material called an electrolyte. The two electrodes have different chemical potentials, dictated by the chemistry that occurs at each. When these electrodes are connected by means of an external device, electrons spontaneously flow from the more negative to the more positive potential. Ions are transported through the electrolyte, maintaining the charge balance, and electrical energy can be tapped by the external circuit. In secondary, or rechargeable, batteries, a larger voltage applied in the opposite direction can cause the battery to recharge. The amount of electrical energy per mass or volume that a battery can deliver is a function of the cell's voltage and capacity, which are dependent on the …",
"title": ""
}
] | [
{
"docid": "968ee8726afb8cc82d629ac8afabf3db",
"text": "Online communities are increasingly important to organizations and the general public, but there is little theoretically based research on what makes some online communities more successful than others. In this article, we apply theory from the field of social psychology to understand how online communities develop member attachment, an important dimension of community success. We implemented and empirically tested two sets of community features for building member attachment by strengthening either group identity or interpersonal bonds. To increase identity-based attachment, we gave members information about group activities and intergroup competition, and tools for group-level communication. To increase bond-based attachment, we gave members information about the activities of individual members and interpersonal similarity, and tools for interpersonal communication. Results from a six-month field experiment show that participants’ visit frequency and self-reported attachment increased in both conditions. Community features intended to foster identity-based attachment had stronger effects than features intended to foster bond-based attachment. Participants in the identity condition with access to group profiles and repeated exposure to their group’s activities visited their community twice as frequently as participants in other conditions. The new features also had stronger effects on newcomers than on old-timers. This research illustrates how theory from the social science literature can be applied to gain a more systematic understanding of online communities and how theory-inspired features can improve their success. 1",
"title": ""
},
{
"docid": "c3271548bf0c90541153e629dc298d61",
"text": "A number of recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large dataset can be adopted as a universal image descriptor, and that doing so leads to impressive performance at a range of image classification tasks. Most of these studies, if not all, adopt activations of the fully-connected layer of a DCNN as the image or region representation and it is believed that convolutional layer activations are less discriminative. This paper, however, advocates that if used appropriately, convolutional layer activations constitute a powerful image representation. This is achieved by adopting a new technique proposed in this paper called cross-convolutional-layer pooling. More specifically, it extracts subarrays of feature maps of one convolutional layer as local features, and pools the extracted features with the guidance of the feature maps of the successive convolutional layer. Compared with existing methods that apply DCNNs in the similar local feature setting, the proposed method avoids the input image style mismatching issue which is usually encountered when applying fully connected layer activations to describe local regions. Also, the proposed method is easier to implement since it is codebook free and does not have any tuning parameters. By applying our method to four popular visual classification tasks, it is demonstrated that the proposed method can achieve comparable or in some cases significantly better performance than existing fully-connected layer based image representations.",
"title": ""
},
{
"docid": "92a0fb602276952962762b07e7cd4d2b",
"text": "Representation of video is a vital problem in action recognition. This paper proposes Stacked Fisher Vectors (SFV), a new representation with multi-layer nested Fisher vector encoding, for action recognition. In the first layer, we densely sample large subvolumes from input videos, extract local features, and encode them using Fisher vectors (FVs). The second layer compresses the FVs of subvolumes obtained in previous layer, and then encodes them again with Fisher vectors. Compared with standard FV, SFV allows refining the representation and abstracting semantic information in a hierarchical way. Compared with recent mid-level based action representations, SFV need not to mine discriminative action parts but can preserve mid-level information through Fisher vector encoding in higher layer. We evaluate the proposed methods on three challenging datasets, namely Youtube, J-HMDB, and HMDB51. Experimental results demonstrate the effectiveness of SFV, and the combination of the traditional FV and SFV outperforms stateof-the-art methods on these datasets with a large margin.",
"title": ""
},
{
"docid": "7f0a721287ed05c67c5ecf1206bab4e6",
"text": "This study underlines the value of the brand personality and its influence on consumer’s decision making, through relational variables. An empirical study, in which 380 participants have received an SMS ad, confirms that brand personality does actually influence brand trust, brand attachment and brand commitment. The levels of brand sensitivity and involvement have also an impact on the brand personality and on its related variables.",
"title": ""
},
{
"docid": "274d24f2e061eea92a2030e93c640e27",
"text": "Traditional convolutional layers extract features from patches of data by applying a non-linearity on an affine function of the input. We propose a model that enhances this feature extraction process for the case of sequential data, by feeding patches of the data into a recurrent neural network and using the outputs or hidden states of the recurrent units to compute the extracted features. By doing so, we exploit the fact that a window containing a few frames of the sequential data is a sequence itself and this additional structure might encapsulate valuable information. In addition, we allow for more steps of computation in the feature extraction process, which is potentially beneficial as an affine function followed by a non-linearity can result in too simple features. Using our convolutional recurrent layers, we obtain an improvement in performance in two audio classification tasks, compared to traditional convolutional layers.",
"title": ""
},
{
"docid": "3920597ba84564e1928773e1f22cd6d4",
"text": "Neuroelectric oscillations reflect rhythmic shifting of neuronal ensembles between high and low excitability states. In natural settings, important stimuli often occur in rhythmic streams, and when oscillations entrain to an input rhythm their high excitability phases coincide with events in the stream, effectively amplifying neuronal input responses. When operating in a 'rhythmic mode', attention can use these differential excitability states as a mechanism of selection by simply enforcing oscillatory entrainment to a task-relevant input stream. When there is no low-frequency rhythm that oscillations can entrain to, attention operates in a 'continuous mode', characterized by extended increase in gamma synchrony. We review the evidence for early sensory selection by oscillatory phase-amplitude modulations, its mechanisms and its perceptual and behavioral consequences.",
"title": ""
},
{
"docid": "9a7f9ecf4dafaaaee2a76d49b51c545e",
"text": "Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).",
"title": ""
},
{
"docid": "69f72b8eadadba733f240fd652ca924e",
"text": "We address the problem of finding descriptive explanations of facts stored in a knowledge graph. This is important in high-risk domains such as healthcare, intelligence, etc. where users need additional information for decision making and is especially crucial for applications that rely on automatically constructed knowledge bases where machine learned systems extract facts from an input corpus and working of the extractors is opaque to the end-user. We follow an approach inspired from information retrieval and propose a simple and efficient, yet effective solution that takes into account passage level as well as document level properties to produce a ranked list of passages describing a given input relation. We test our approach using Wikidata as the knowledge base and Wikipedia as the source corpus and report results of user studies conducted to study the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "fda40e94b771e6ac4d0390236fd4eb56",
"text": "How does users’ freedom of choice, or the lack thereof, affect interface preferences? The research reported in this article approaches this question from two theoretical perspectives. The first of these argues that an interface with a dominant market share benefits from the absence of competition because users acquire skills that are specific to that particular interface, which in turn reduces the probability that they will switch to a new competitor interface in the future. By contrast, the second perspective proposes that the advantage that a market leader has in being able to install a set of non-transferable skills in its user base is offset by a psychological force that causes humans to react against perceived constraints on their freedom of choice. We test a research model that incorporates the key predictions of these two theoretical perspectives in an experiment involving consequential interface choices. We find strong support for the second perspective, which builds upon the theory of psychological reactance.",
"title": ""
},
{
"docid": "ae527d90981c371c4807799802dbc5a8",
"text": "We present our efforts to deploy mobile robots in office environments, focusing in particular on the challenge of planning a schedule for a robot to accomplish user-requested actions. We concretely aim to make our CoBot mobile robots available to execute navigational tasks requested by users, such as telepresence, and picking up and delivering messages or objects at different locations. We contribute an efficient web-based approach in which users can request and schedule the execution of specific tasks. The scheduling problem is converted to a mixed integer programming problem. The robot executes the scheduled tasks using a synthetic speech and touch-screen interface to interact with users, while allowing users to follow the task execution online. Our robot uses a robust Kinect-based safe navigation algorithm, moves fully autonomously without the need to be chaperoned by anyone, and is robust to the presence of moving humans, as well as non-trivial obstacles, such as legged chairs and tables. Our robots have already performed 15km of autonomous service tasks. Introduction and Related Work We envision a system in which autonomous mobile robots robustly perform service tasks in indoor environments. The robots perform tasks which are requested by building residents over the web, such as delivering mail, fetching coffee, or guiding visitors. To fulfill the users’ requests, we must plan a schedule of when the robot will execute each task in accordance with the constraints specified by the users. Many efforts have used the web to access robots, including the early examples of the teleoperation of a robotic arm (Goldberg et al. 1995; Taylor and Trevelyan 1995) and interfacing with a mobile robot (e.g, (Simmons et al. 1997; Siegwart and Saucy 1999; Saucy and Mondada 2000; Schulz et al. 2000)), among others. The robot Xavier (Simmons et al. 1997; 2000) allowed users to make requests over the web for the robot to go to specific places, and other mobile robots soon followed (Siegwart and Saucy 1999; Grange, Fong, and Baur 2000; Saucy and Mondada 2000; Schulz et al. 2000). The RoboCup@Home initiative (Visser and Burkhard 2007) provides competition setups for indoor Copyright © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: CoBot-2, an omnidirectional mobile robot for indoor users. service autonomous robots, with an increasingly wide scope of challenges focusing on robot autonomy and verbal interaction with users. In this work, we present our architecture to effectively make a fully autonomous indoor service robot available to general users. We focus on the problem of planning a schedule for the robot, and present a mixed integer linear programming solution for planning a schedule. We ground our work on the CoBot-2 platform1, shown in Figure 1. CoBot-2 autonomously localizes and navigates in a multi-floor office environment while effectively avoiding obstacles (Biswas and Veloso 2010). The robot carries a variety of sensing and computing devices, including a camera, a Kinect depthcamera, a Hokuyo LIDAR, a touch-screen tablet, microphones, speakers, and wireless communication. CoBot-2 executes tasks sent by users over the web, and we have devised a user-friendly web interface that allows users to specify tasks. 
Currently, the robot executes three types of tasks, including a GoToRoom task where the robot visits a location and a Telepresence task where the robot goes to a location. (CoBot-2 was designed and built by Michael Licitra, mlicitra@cmu.edu, as a scaled-up version of the CMDragons small-size soccer robots, also designed and built by him.)",
"title": ""
},
{
"docid": "64753b3c47e52ff6f1760231dc13cd63",
"text": "Theatrical improvisation (impro or improv) is a demanding form of live, collaborative performance. Improv is a humorous and playful artform built on an open-ended narrative structure which simultaneously celebrates effort and failure. It is thus an ideal test bed for the development and deployment of interactive artificial intelligence (AI)-based conversational agents, or artificial improvisors. This case study introduces an improv show experiment featuring human actors and artificial improvisors. We have previously developed a deeplearning-based artificial improvisor, trained on movie subtitles, that can generate plausible, context-based, lines of dialogue suitable for theatre (Mathewson and Mirowski 2017b). In this work, we have employed it to control what a subset of human actors say during an improv performance. We also give human-generated lines to a different subset of performers. All lines are provided to actors with headphones and all performers are wearing headphones. This paper describes a Turing test, or imitation game, taking place in a theatre, with both the audience members and the performers left to guess who is a human and who is a machine. In order to test scientific hypotheses about the perception of humans versus machines we collect anonymous feedback from volunteer performers and audience members. Our results suggest that rehearsal increases proficiency and possibility to control events in the performance. That said, consistency with real world experience is limited by the interface and the mechanisms used to perform the show. We also show that human-generated lines are shorter, more positive, and have less difficult words with more grammar and spelling mistakes than the artificial improvisor generated lines.",
"title": ""
},
{
"docid": "4f7fdd852f520f6928eeb69b3d0d1632",
"text": "Hadoop MapReduce is a popular framework for distributed storage and processing of large datasets and is used for big data analytics. It has various configuration parameters which play an important role in deciding the performance i.e., the execution time of a given big data processing job. Default values of these parameters do not result in good performance and therefore it is important to tune them. However, there is inherent difficulty in tuning the parameters due to two important reasons - first, the parameter search space is large and second, there are cross-parameter interactions. Hence, there is a need for a dimensionality-free method which can automatically tune the configuration parameters by taking into account the cross-parameter dependencies. In this paper, we propose a novel Hadoop parameter tuning methodology, based on a noisy gradient algorithm known as the simultaneous perturbation stochastic approximation (SPSA). The SPSA algorithm tunes the selected parameters by directly observing the performance of the Hadoop MapReduce system. The approach followed is independent of parameter dimensions and requires only 2 observations per iteration while tuning. We demonstrate the effectiveness of our methodology in achieving good performance on popular Hadoop benchmarks namely Grep, Bigram, Inverted Index, Word Co-occurrence and Terasort. Our method, when tested on a 25 node Hadoop cluster shows 45-66% decrease in execution time of Hadoop jobs on an average, when compared to prior methods. Further, our experiments also indicate that the parameters tuned by our method are resilient to changes in number of cluster nodes, which makes our method suitable to optimize Hadoop when it is provided as a service on the cloud.",
"title": ""
},
{
"docid": "307c8b04c447757f1bbcc5bf9976f423",
"text": "BACKGROUND\nChemical and biomedical Named Entity Recognition (NER) is an essential prerequisite task before effective text mining can begin for biochemical-text data. Exploiting unlabeled text data to leverage system performance has been an active and challenging research topic in text mining due to the recent growth in the amount of biomedical literature. We present a semi-supervised learning method that efficiently exploits unlabeled data in order to incorporate domain knowledge into a named entity recognition model and to leverage system performance. The proposed method includes Natural Language Processing (NLP) tasks for text preprocessing, learning word representation features from a large amount of text data for feature extraction, and conditional random fields for token classification. Other than the free text in the domain, the proposed method does not rely on any lexicon nor any dictionary in order to keep the system applicable to other NER tasks in bio-text data.\n\n\nRESULTS\nWe extended BANNER, a biomedical NER system, with the proposed method. This yields an integrated system that can be applied to chemical and drug NER or biomedical NER. We call our branch of the BANNER system BANNER-CHEMDNER, which is scalable over millions of documents, processing about 530 documents per minute, is configurable via XML, and can be plugged into other systems by using the BANNER Unstructured Information Management Architecture (UIMA) interface. BANNER-CHEMDNER achieved an 85.68% and an 86.47% F-measure on the testing sets of CHEMDNER Chemical Entity Mention (CEM) and Chemical Document Indexing (CDI) subtasks, respectively, and achieved an 87.04% F-measure on the official testing set of the BioCreative II gene mention task, showing remarkable performance in both chemical and biomedical NER. BANNER-CHEMDNER system is available at: https://bitbucket.org/tsendeemts/banner-chemdner.",
"title": ""
},
{
"docid": "6cc3476cbb294ba2b6e95b962ff7c5d6",
"text": "Recent advances in position localization techniques have fundamentally enhanced social networking services, allowing users to share their locations and location-related content, such as geo-tagged photos and notes. We refer to these social networks as location-based social networks (LBSNs). Location data both bridges the gap between the physical and digital worlds and enables a deeper understanding of user preferences and behavior. This addition of vast geospatial datasets has stimulated research into novel recommender systems that seek to facilitate users’ travels and social interactions. In this paper, we offer a systematic review of this research, summarizing the contributions of individual efforts and exploring their relations. We discuss the new properties and challenges that location brings to recommendation systems for LBSNs. We present a comprehensive survey of recommender systems for LBSNs, analyzing 1) the data source used, 2) the methodology employed to generate a recommendation, and 3) the objective of the recommendation. We propose three taxonomies that partition the recommender systems according to the properties listed above. First, we categorize the recommender systems by the objective of the recommendation, which can include locations, users, activities, or social media.Second, we categorize the recommender systems by the methodologies employed, including content-based, link analysis-based, and collaborative filtering-based methodologies. Third, we categorize the systems by the data sources used, including user profiles, user online histories, and user location histories. For each category, we summarize the goals and contributions of each system and highlight one representative research effort. Further, we provide comparative analysis of the recommendation systems within each category. Finally, we discuss methods of evaluation for these recommender systems and point out promising research topics for future work. This article presents a panorama of the recommendation systems in location-based social networks with a balanced depth, facilitating research into this important research theme.",
"title": ""
},
{
"docid": "7804d1c4ec379ed47d45917786946b2f",
"text": "Data mining technology has been applied to library management. In this paper, Boustead College Library Information Management System in the history of circulation records, the reader information and collections as a data source, using the Microsoft SQL Server 2005 as a data mining tool, applying data mining algorithm as cluster, association rules and time series to identify characteristics of the reader to borrow in order to achieve individual service.",
"title": ""
},
{
"docid": "742c7ccfc1bc0f5150b47683fbfd455e",
"text": "Detailed facial performance geometry can be reconstructed using dense camera and light setups in controlled studios. However, a wide range of important applications cannot employ these approaches, including all movie productions shot from a single principal camera. For post-production, these require dynamic monocular face capture for appearance modification. We present a new method for capturing face geometry from monocular video. Our approach captures detailed, dynamic, spatio-temporally coherent 3D face geometry without the need for markers. It works under uncontrolled lighting, and it successfully reconstructs expressive motion including high-frequency face detail such as folds and laugh lines. After simple manual initialization, the capturing process is fully automatic, which makes it versatile, lightweight and easy-to-deploy. Our approach tracks accurate sparse 2D features between automatically selected key frames to animate a parametric blend shape model, which is further refined in pose, expression and shape by temporally coherent optical flow and photometric stereo. We demonstrate performance capture results for long and complex face sequences captured indoors and outdoors, and we exemplify the relevance of our approach as an enabling technology for model-based face editing in movies and video, such as adding new facial textures, as well as a step towards enabling everyone to do facial performance capture with a single affordable camera.",
"title": ""
},
{
"docid": "4f43cd8225c70c0328ea4a971abc0e2f",
"text": "Home security system is needed for convenience and safety. This system invented to keep home safe from intruder. In this work, we present the design and implementation of a GSM based wireless home security system. which take a very less power. The system is a wireless home network which contains a GSM modem and magnet with relay which are door security nodes. The system can response rapidly as intruder detect and GSM module will do alert home owner. This security system for alerting a house owner wherever he will. In this system a relay and magnet installed at entry point to a precedence produce a signal through a public telecom network and sends a message or redirect a call that that tells about your home update or predefined message which is embedded in microcontroller. Suspected activities are conveyed to remote user through SMS or Call using GSM technology.",
"title": ""
},
{
"docid": "19700a52f05178ea1c95d576f050f57d",
"text": "With the progress of mobile devices and wireless broadband, a new eMarket platform, termed spatial crowdsourcing is emerging, which enables workers (aka crowd) to perform a set of spatial tasks (i.e., tasks related to a geographical location and time) posted by a requester. In this paper, we study a version of the spatial crowd-sourcing problem in which the workers autonomously select their tasks, called the worker selected tasks (WST) mode. Towards this end, given a worker, and a set of tasks each of which is associated with a location and an expiration time, we aim to find a schedule for the worker that maximizes the number of performed tasks. We first prove that this problem is NP-hard. Subsequently, for small number of tasks, we propose two exact algorithms based on dynamic programming and branch-and-bound strategies. Since the exact algorithms cannot scale for large number of tasks and/or limited amount of resources on mobile platforms, we also propose approximation and progressive algorithms. We conducted a thorough experimental evaluation on both real-world and synthetic data to compare the performance and accuracy of our proposed approaches.",
"title": ""
},
{
"docid": "2effb3276d577d961f6c6ad18a1e7b3e",
"text": "This paper extends the recovery of structure and motion to im age sequences with several independently moving objects. The mot ion, structure, and camera calibration are all a-priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camer parameters. Existing work on independent motions has not employed this constr ai t, and therefore has not gained over independent static-scene reconstructi ons. We show how this constraint leads to several new results in st ructure and motion recovery, where Euclidean reconstruction becomes pos ible in the multibody case, when it was underconstrained for a static scene. We sho w how to combine motions of high-relief, low-relief and planar objects. Add itionally we show that structure and motion can be recovered from just 4 points in th e uncalibrated, fixed camera, case. Experiments on real and synthetic imagery demonstrate the v alidity of the theory and the improvement in accuracy obtained using multibody an alysis.",
"title": ""
},
{
"docid": "61ecbc652cf9f57136e8c1cd6fed2fb0",
"text": "Recent advancements in digital technology have attracted the interest of educators and researchers to develop technology-assisted inquiry-based learning environments in the domain of school science education. Traditionally, school science education has followed deductive and inductive forms of inquiry investigation, while the abductive form of inquiry has previously been sparsely explored in the literature related to computers and education. We have therefore designed a mobile learning application ‘ThinknLearn’, which assists high school students in generating hypotheses during abductive inquiry investigations. The M3 evaluation framework was used to investigate the effectiveness of using ‘ThinknLearn’ to facilitate student learning. The results indicated in this paper showed improvements in the experimental group’s learning performance as compared to a control group in pre-post tests. In addition, the experimental group also maintained this advantage during retention tests as well as developing positive attitudes toward mobile learning. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | scidocsrr |
caa8433540b6133b9466e0583701db74 | Exploring capturable everyday memory for autobiographical authentication | [
{
"docid": "2a0de2b93a6a227380264e7bc6cac094",
"text": "The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area. We also try to answer two important questions: \"Are graphical passwords as secure as text-based passwords?\"; \"What are the major design and implementation issues for graphical passwords?\" This survey will be useful for information security researchers and practitioners who are interested in finding an alternative to text-based authentication methods",
"title": ""
},
{
"docid": "46a66d6d3d4ad927deb96d8d15af6669",
"text": "Security questions (or challenge questions) are commonly used to authenticate users who have lost their passwords. We examined the password retrieval mechanisms for a number of personal banking websites, and found that many of them rely in part on security questions with serious usability and security weaknesses. We discuss patterns in the security questions we observed. We argue that today's personal security questions owe their strength to the hardness of an information-retrieval problem. However, as personal information becomes ubiquitously available online, the hardness of this problem, and security provided by such questions, will likely diminish over time. We supplement our survey of bank security questions with a small user study that supplies some context for how such questions are used in practice.",
"title": ""
}
] | [
{
"docid": "c4fa73bd2d6b06f4655eeacaddf3b3a7",
"text": "In recent years, the robotic research area has become extremely prolific in terms of wearable active exoskeletons for human body motion assistance, with the presentation of many novel devices, for upper limbs, lower limbs, and the hand. The hand shows a complex morphology, a high intersubject variability, and offers limited space for physical interaction with a robot: as a result, hand exoskeletons usually are heavy, cumbersome, and poorly usable. This paper introduces a novel device designed on the basis of human kinematic compatibility, wearability, and portability criteria. This hand exoskeleton, briefly HX, embeds several features as underactuated joints, passive degrees of freedom ensuring adaptability and compliance toward the hand anthropometric variability, and an ad hoc design of self-alignment mechanisms to absorb human/robot joint axes misplacement, and proposes a novel mechanism for the thumb opposition. The HX kinematic design and actuation are discussed together with theoretical and experimental data validating its adaptability performances. Results suggest that HX matches the self-alignment design goal and is then suited for close human-robot interaction.",
"title": ""
},
{
"docid": "67261cd9b1d71b57cb53766b06e157e4",
"text": "Automatically recognizing rear light signals of front vehicles can significantly improve driving safety by automatic alarm and taking actions proactively to prevent rear-end collisions and accidents. Much previous research only focuses on detecting brake signals at night. In this paper, we present the design and implementation of a robust hierarchical framework for detecting taillights of vehicles and estimating alert signals (turning and braking) in the daytime. The three-layer structure of the vision-based framework can obviously reduce both false positives and false negatives of taillight detection. Comparing to other existing work addressing nighttime detection, the proposed method is capable of recognizing taillight signals under different illumination circumstances. By carrying out contrast experiments with existing state-of-the-art methods, the results show the high detection rate of the framework in different weather conditions during the daytime.",
"title": ""
},
{
"docid": "77f3dfeba56c3731fda1870ce48e1aca",
"text": "The organicist view of society is updated by incorporating concepts from cybernetics, evolutionary theory, and complex adaptive systems. Global society can be seen as an autopoietic network of self-producing components, and therefore as a living system or ‘superorganism’. Miller's living systems theory suggests a list of functional components for society's metabolism and nervous system. Powers' perceptual control theory suggests a model for a distributed control system implemented through the market mechanism. An analysis of the evolution of complex, networked systems points to the general trends of increasing efficiency, differentiation and integration. In society these trends are realized as increasing productivity, decreasing friction, increasing division of labor and outsourcing, and increasing cooperativity, transnational mergers and global institutions. This is accompanied by increasing functional autonomy of individuals and organisations and the decline of hierarchies. The increasing complexity of interactions and instability of certain processes caused by reduced friction necessitate a strengthening of society's capacity for information processing and control, i.e. its nervous system. This is realized by the creation of an intelligent global computer network, capable of sensing, interpreting, learning, thinking, deciding and initiating actions: the ‘global brain’. Individuals are being integrated ever more tightly into this collective intelligence. Although this image may raise worries about a totalitarian system that restricts individual initiaSocial Evolution & History / March 2007 58 tive, the superorganism model points in the opposite direction, towards increasing freedom and diversity. The model further suggests some specific futurological predictions for the coming decades, such as the emergence of an automated distribution network, a computer immune system, and a global consensus about values and standards.",
"title": ""
},
{
"docid": "4248d4620096f5e4520a8f2d5ace2b63",
"text": "With rapid increase in internet traffic over last few years due to the use of variety of internet applications, the area of IP traffic classification becomes very significant from the point of view of various internet service providers and other governmental and private organizations. Now days, traditional IP traffic classification techniques such as port number based and payload based direct packet inspection techniques are seldom used because of use of dynamic port number instead of well-known port number in packet headers and various encryption techniques which inhibit inspection of packet payload. Current trends are use of machine learning (ML) techniques for this classification. In this research paper, real time internet traffic dataset has been developed using packet capturing tool and then using attribute selection algorithms, a reduced feature dataset has been developed. After that, five ML algorithms MLP, RBF, C4.5, Bayes Net and Naïve Bayes are used for IP traffic classification with these datasets. This experimental analysis shows that Bayes Net and C4.5 are effective ML techniques for IP traffic classification with accuracy in the range of 94 %.",
"title": ""
},
{
"docid": "c89d41581dfbb30a12a9d1b7f189d6d8",
"text": "Relational phrases (e.g., “got married to”) and their hypernyms (e.g., “is a relative of”) are central for many tasks including question answering, open information extraction, paraphrasing, and entailment detection. This has motivated the development of several linguistic resources (e.g. DIRT, PATTY, and WiseNet) which systematically collect and organize relational phrases. These resources have demonstrable practical benefits, but are each limited due to noise, sparsity, or size. We present a new general-purpose method, RELLY, for constructing a large hypernymy graph of relational phrases with high-quality subsumptions using collective probabilistic programming techniques. Our graph induction approach integrates small highprecision knowledge bases together with large automatically curated resources, and reasons collectively to combine these resources into a consistent graph. Using RELLY, we construct a high-coverage, high-precision hypernymy graph consisting of 20K relational phrases and 35K hypernymy links. Our evaluation indicates a hypernymy link precision of 78%, and demonstrates the value of this resource for a document-relevance ranking task.",
"title": ""
},
{
"docid": "bb853c369f37d2d960d6b312f80cfa98",
"text": "The purpose of this platform is to support research and education goals in human-robot interaction and mobile manipulation with applications that require the integration of these abilities. In particular, our research aims to develop personal robots that work with people as capable teammates to assist in eldercare, healthcare, domestic chores, and other physical tasks that require robots to serve as competent members of human-robot teams. The robot’s small, agile design is particularly well suited to human-robot interaction and coordination in human living spaces. Our collaborators include the Laboratory for Perceptual Robotics at the University of Massachusetts at Amherst, Xitome Design, Meka Robotics, and digitROBOTICS.",
"title": ""
},
{
"docid": "abe0205896b0edb31e1a527456b33184",
"text": "MouseLight is a spatially-aware standalone mobile projector with the form factor of a mouse that can be used in combination with digital pens on paper. By interacting with the projector and the pen bimanually, users can visualize and modify the virtually augmented contents on top of the paper, and seamlessly transition between virtual and physical information. We present a high fidelity hardware prototype of the system and demonstrate a set of novel interactions specifically tailored to the unique properties of MouseLight. MouseLight differentiates itself from related systems such as PenLight in two aspects. First, MouseLight presents a rich set of bimanual interactions inspired by the ToolGlass interaction metaphor, but applied to physical paper. Secondly, our system explores novel displaced interactions, that take advantage of the independent input and output that is spatially aware of the underneath paper. These properties enable users to issue remote commands such as copy and paste or search. We also report on a preliminary evaluation of the system which produced encouraging observations and feedback.",
"title": ""
},
{
"docid": "fef4383a5a06687636ba4001ab0e510c",
"text": "In this paper, a depth camera-based novel approach for human activity recognition is presented using robust depth silhouettes context features and advanced Hidden Markov Models (HMMs). During HAR framework, at first, depth maps are processed to identify human silhouettes from noisy background by considering frame differentiation constraints of human body motion and compute depth silhouette area for each activity to track human movements in a scene. From the depth silhouettes context features, temporal frames information are computed for intensity differentiation measurements, depth history features are used to store gradient orientation change in overall activity sequence and motion difference features are extracted for regional motion identification. Then, these features are processed by Principal component analysis for dimension reduction and kmean clustering for code generation to make better activity representation. Finally, we proposed a new way to model, train and recognize different activities using advanced HMM. Each activity has been chosen with the highest likelihood value. Experimental results show superior recognition rate, resulting up to the mean recognition of 57.69% over the state of the art methods for fifteen daily routine activities using IM-Daily Depth Activity dataset. In addition, MSRAction3D dataset also showed some promising results.",
"title": ""
},
{
"docid": "cbbd8c44de7e060779ed60c6edc31e3c",
"text": "This letter presents a compact broadband microstrip-line-fed sleeve monopole antenna for application in the DTV system. The design of meandering the monopole into a compact structure is applied for size reduction. By properly selecting the length and spacing of the sleeve, the broadband operation for the proposed design can be achieved, and the obtained impedance bandwidth covers the whole DTV (470862 MHz) band. Most importantly, the matching condition over a wide frequency range can be performed well even when a small ground-plane length is used; meanwhile, a small variation in the impedance bandwidth is observed for the ground-plane length varied in a great range.",
"title": ""
},
{
"docid": "a67df1737ca4e5cb41fe09ccb57c0e88",
"text": "Generation of electricity from solar energy has gained worldwide acceptance due to its abundant availability and eco-friendly nature. Even though the power generated from solar looks to be attractive; its availability is subjected to variation owing to many factors such as change in irradiation, temperature, shadow etc. Hence, extraction of maximum power from solar PV using Maximum Power Point Tracking (MPPT) method was the subject of study in the recent past. Among many methods proposed, Hill Climbing and Incremental Conductance MPPT methods were popular in reaching Maximum Power under constant irradiation. However, these methods show large steady state oscillations around MPP and poor dynamic performance when subjected to change in environmental conditions. On the other hand, bioinspired algorithms showed excellent characteristics when dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations. Hence, in this paper an attempt is made by applying modifications to Particle Swarm Optimization technique, with emphasis on initial value selection, for Maximum Power Point Tracking. The key features of this method include ability to track the global peak power accurately under change in environmental condition with almost zero steady state oscillations, faster dynamic response and easy implementation. Systematic evaluation has been carried out for different partial shading conditions and finally the results obtained are compared with existing methods. In addition, simulations results are validated via built-in hardware prototype. © 2015 Published by Elsevier B.V. 37 38 39 40 41 42 43 44 45 46 47 48 . Introduction Ever growing energy demand by mankind and the limited availbility of resources remain as a major challenge to the power sector ndustry. The need for renewable energy resources has been augented in large scale and aroused due to its huge availability nd pollution free operation. Among the various renewable energy esources, solar energy has gained worldwide recognition because f its minimal maintenance, zero noise and reliability. Because of he aforementioned advantages; solar energy have been widely sed for various applications, but not limited to, such as megawatt cale power plants, water pumping, solar home systems, commuPlease cite this article in press as: R. Venugopalan, et al., Modified Parti Tracking for uniform and under partial shading condition, Appl. Soft C ication satellites, space vehicles and reverse osmosis plants [1]. owever, power generation using solar energy still remain uncerain, despite of all the efforts, due to various factors such as poor ∗ Corresponding author at: SELECT, VIT University, Vellore, Tamilnadu 632014, ndia. Tel.: +91 9600117935; fax: +91 9490113830. E-mail address: sudhakar.babu2013@vit.ac.in (T. Sudhakarbabu). ttp://dx.doi.org/10.1016/j.asoc.2015.05.029 568-4946/© 2015 Published by Elsevier B.V. 49 50 51 52 conversion efficiency, high installation cost and reduced power output under varying environmental conditions. Further, the characteristics of solar PV are non-linear in nature imposing constraints on solar power generation. Therefore, to maximize the power output from solar PV and to enhance the operating efficiency of the solar photovoltaic system, Maximum Power Point Tracking (MPPT) algorithms are essential [2]. 
Various MPPT algorithms [3–5] have been investigated and reported in the literature and the most popular ones are Fractional Open Circuit Voltage [6–8], Fractional Short Circuit Current [9–11], Perturb and Observe (P&O) [12–17], Incremental Conductance (Inc. Cond.) [18–22], and Hill Climbing (HC) algorithm [23–26]. In fractional open circuit voltage, and fractional short circuit current method; its performance depends on an approximate linear correlation between Vmpp, Voc and Impp, Isc values. However, the above relation is not practically valid; hence, exact value of Maximum cle Swarm Optimization technique based Maximum Power Point omput. J. (2015), http://dx.doi.org/10.1016/j.asoc.2015.05.029 Power Point (MPP) cannot be assured. Perturb and Observe (P&O) method works with the voltage perturbation based on present and previous operating power values. Regardless of its simple structure, its efficiency principally depends on the tradeoff between the 53 54 55 56 ARTICLE IN G Model ASOC 2982 1–12 2 R. Venugopalan et al. / Applied Soft C Nomenclature IPV Current source Rs Series resistance Rp Parallel resistance VD diode voltage ID diode current I0 leakage current Vmpp voltage at maximum power point Voc open circuit voltage Impp current at maximum power point Isc short circuit current Vmpn nominal maximum power point voltage at 1000 W/m2 Npp number of parallel PV modules Nss number of series PV modules w weight factor c1 acceleration factor c2 acceleration factor pbest personal best position gbest global best position Vt thermal voltage K Boltzmann constant T temperature q electron charge Ns number of cells in series Vocn nominal open circuit voltage at 1000W/m2 G irradiation Gn nominal Irradiation Kv voltage temperature coefficient dT difference in temperature RLmin minimum value of load at output RLmax maximum value of load at output Rin internal resistance of the PV module RPVmin minimum reflective impedance of PV array RPVmax maximum reflective impedance of PV array R equivalent output load resistance t M o w t b A c M h n ( e a i p p w a u t H o i 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 o b converter efficiency racking speed and the steady state oscillations in the region of PP [15]. Incremental Conductance (Inc. Cond.) algorithm works n the principle of comparing ratios of Incremental Conductance ith instantaneous conductance and it has the similar disadvanage as that of P&O method [20,21]. HC method works alike P&O ut it is based on the perturbation of duty cycle of power converter. ll these traditional methods have the following disadvantages in ommon; reduced efficiency and steady state oscillations around PP. Realizing the above stated drawbacks; various researchers ave worked on applying certain Artificial Intelligence (AI) techiques like Neural Network (NN) [27,28] and Fuzzy Logic Control FLC) [29,30]. However, these techniques require periodic training, normous volume of data for training, computational complexity nd large memory capacity. Application of aforementioned MPPT methods for centralzed/string PV system is limited as they fail to track the global eak power under partial shading conditions. 
In addition, multile peaks occur in P-V curve under partial shading condition in hich the unique peak point i.e., global power peak should be ttained. However, when conventional MPPT techniques are used nder such conditions, they usually get trapped in any one of Please cite this article in press as: R. Venugopalan, et al., Modified Part Tracking for uniform and under partial shading condition, Appl. Soft C he local power peaks; drastically lowering the search efficiency. ence, to improve MPP tracking efficiency of conventional methds under PS conditions certain modifications have been proposed n Ref. [31]. Some used two stage approach to track the MPP [32]. PRESS omputing xxx (2015) xxx–xxx In the first stage, a wide search is performed which ensures that the operating point is moved closer to the global peak which is further fine-tuned in the second stage to reach the global peak value. Even though tracking efficiency has improved the method still fails to find the global maximum under all conditions. Another interesting approach is improving the Fibonacci search method for global MPP tracking [33]. Alike two stage method, this one also suffers from the same drawback that it does not guarantee accurate MPP tracking under all shaded conditions [34]. Yet another unique formulation combining DIRECT search method with P&O was put forward for global MPP tracking in Ref. [35]. Even though it is rendered effective, it is very complex and increases the computational burden. In the recent past, bio-inspired algorithms like GA, PSO and ACO have drawn considerable researcher’s attention for MPPT application; since they ensure sufficient class of accuracy while dealing with non-linear, non-differentiable and stochastic optimization problems without involving excessive mathematical computations [32,36–38]. Further, these methods offer various advantages such as computational simplicity, easy implementation and faster response. Among those methods, PSO method is largely discussed and widely used for solar MPPT due to the fact that it has simple structure, system independency, high adaptability and lesser number of tuning parameters. Further in PSO method, particles are allowed to move in random directions and the best values are evolved based on pbest and gbest values. This exploration process is very suitable for MPPT application. To improve the search efficiency of the conventional PSO method authors have proposed modifications to the existing algorithm. In Ref. [39], the authors have put forward an additional perception capability for the particles in search space so that best solutions are evolved with higher accuracy than PSO. However, details on implementation under partial shading condition are not discussed. Further, this method is only applicable when the entire module receive uniform insolation cannot be considered. Traditional PSO method is modified in Ref. [40] by introducing equations for velocity update and inertia. Even though the method showed better performance, use of extra coefficients in the conventional PSO search limits its advantage and increases the computational burden of the algorithm. Another approach",
"title": ""
},
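The record above describes tracking the global maximum power point of a PV array with particle swarm optimization, where candidate operating points act as particles updated from pbest/gbest values. Below is a minimal, generic PSO sketch of that idea, assuming a caller-supplied `measure_power(duty)` function (hypothetical here) that returns the PV power observed at a given converter duty cycle; the paper's specific modifications, such as its initial value selection scheme, are not reproduced.

```python
import random

def pso_mppt(measure_power, n_particles=5, iters=30,
             w=0.4, c1=1.2, c2=1.6, seed=0):
    """Basic PSO search over a converter duty cycle in [0, 1].

    measure_power(duty) -> observed PV power; assumed to be supplied
    by the caller (e.g. a simulator or hardware interface).
    """
    rng = random.Random(seed)
    # Spread the initial particles across the duty-cycle range.
    pos = [i / (n_particles - 1) for i in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)
    pbest_val = [measure_power(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(1.0, max(0.0, pos[i] + vel[i]))
            p = measure_power(pos[i])
            if p > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], p
                if p > gbest_val:
                    gbest, gbest_val = pos[i], p
    return gbest, gbest_val
```

With a simulated P-V curve containing several local peaks, a swarm like this tends to settle on the duty cycle of the global peak rather than the first local peak encountered, which is the property the abstract emphasises under partial shading.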
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "244360e0815243d6a04d64a974da1b89",
"text": "The life-history of Haplorchoides mehrai Pande & Shukla, 1976 is elucidated. The cercariae occurred in the thiarid snail Melanoides tuberculatus (Muller) collected from Chilka Lake, Orissa State. Metacercariae were found beneath the scales of Puntius sophore (Hamilton). Several species of catfishes in the lake served as definitive hosts. All stages in the life-cycle were successfully established under experimental conditions in the laboratory. The cercariae are of opisthorchioid type with a large globular and highly granular excretory bladder and seven pairs of pre-vesicular penetration glands. The adult flukes are redescribed to include details of the ventro-genital complex. Only three Indian species of the genus, i.e. H. attenuatus (Srivastava, 1935), H. pearsoni Pande & Shukla, 1976 and H. mehrai Pande & Shukla, 1976, are considered valid, and the remaining Indian species of the genus are considered as species inquirendae. The generic diagnosis of Haplorchoides is amended and the genus is included in the subfamily Haplorchiinae and the family Heterophyidae.",
"title": ""
},
{
"docid": "1d632c181e89e7d019595f2757f7ee66",
"text": "This study investigated the process by which employee perceptions of the organizational environment are related to job involvement, effort, and performance. The researchers developed an operational definition of psychological climate that was based on how employees perceive aspects of the organizational environment and interpret them in relation to their own well-being. Perceived psychological climate was then related to job involvement, effort, and performance in a path-analytic framework. Results showed that perceptions of a motivating and involving psychological climate were related to job involvement, which in turn was related to effort. Effort was also related to work performance. Results revealed that a modest but statistically significant effect of job involvement on performance became nonsignificant when effort was inserted into the model, indicating the mediating effect of effort on the relationship. The results cross-validated well across 2 samples of outside salespeople, indicating that relationships are generalizable across these different sales contexts.",
"title": ""
},
{
"docid": "9140faa8bd908e5c8d0d9b326f07e231",
"text": "The purpose of this paper is to provide a preliminary report on the rst broad-based experimental comparison of modern heuristics for the asymmetric traveling salesmen problem (ATSP). There are currently three general classes of such heuristics: classical tour construction heuristics such as Nearest Neighbor and the Greedy algorithm, local search algorithms based on re-arranging segments of the tour, as exemplied by the Kanellakis-Papadimitriou algorithm [KP80], and algorithms based on patching together the cycles in a minimum cycle cover, the best of which are variants on an algorithm proposed by Zhang [Zha93]. We test implementations of the main contenders from each class on a variety of instance types, introducing a variety of new random instance generators modeled on real-world applications of the ATSP. Among the many tentative conclusions we reach is that no single algorithm is dominant over all instance classes, although for each class the best tours are found either by Zhang's algorithm or an iterated variant on KanellakisPapadimitriou.",
"title": ""
},
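The benchmark study above groups Nearest Neighbor among the classical tour-construction heuristics for the ATSP. As a point of reference, here is a minimal nearest-neighbour construction over an asymmetric cost matrix; it is only the simplest member of the families compared in the paper, and the local-search and cycle-patching variants are not shown.

```python
def nearest_neighbor_atsp(cost, start=0):
    """Greedy tour construction for an asymmetric cost matrix.

    cost[i][j] is the (possibly non-symmetric) cost of arc i -> j.
    Returns (tour, total_cost) with the tour closed back to start.
    """
    n = len(cost)
    unvisited = set(range(n)) - {start}
    tour, total = [start], 0.0
    current = start
    while unvisited:
        nxt = min(unvisited, key=lambda j: cost[current][j])
        total += cost[current][nxt]
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    total += cost[current][start]          # close the tour
    return tour, total

# Example on a small asymmetric instance.
c = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]
print(nearest_neighbor_atsp(c))            # ([0, 1, 3, 2], 33.0)
```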
{
"docid": "4f58172c8101b67b9cd544b25d09f2e2",
"text": "For years, researchers in face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches are always suffering from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, local Gabor binary pattern histogram sequence (LGBPHS), in which training procedure is unnecessary to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a \"histogram sequence\" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. For recognition, histogram intersection is used to measure the similarity of different LGBPHSs and the nearest neighborhood is exploited for final classification. Additionally, we have further proposed to assign different weights for each histogram piece when measuring two LGBPHSes. Our experimental results on AR and FERET face database show the validity of the proposed approach especially for partially occluded face images, and more impressively, we have achieved the best result on FERET face database.",
"title": ""
},
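The LGBPHS record above matches faces by histogram intersection over concatenated local histograms, optionally weighting each local region. The sketch below shows only that similarity measure; the Gabor filtering and local-binary-pattern encoding that produce the histograms are assumed to have been computed elsewhere.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two non-negative histograms: the sum of bin-wise minima."""
    return float(np.minimum(h1, h2).sum())

def weighted_lgbphs_similarity(regions_a, regions_b, weights=None):
    """Compare two faces represented as lists of per-region histograms.

    regions_a, regions_b: sequences of 1-D numpy arrays (one histogram per
    local region, in the same layout for both faces).  `weights` optionally
    scales the contribution of each region, as the abstract suggests.
    """
    if weights is None:
        weights = np.ones(len(regions_a))
    return sum(w * histogram_intersection(a, b)
               for w, a, b in zip(weights, regions_a, regions_b))
```

Recognition then reduces to nearest-neighbour search over gallery faces using this score, which matches the classification rule described in the abstract.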
{
"docid": "414871ff942d8be9dbb18e0da05455ad",
"text": "We propose a detection and segmentation algorithm for the purposes of fine-grained recognition. The algorithm first detects low-level regions that could potentially belong to the object and then performs a full-object segmentation through propagation. Apart from segmenting the object, we can also `zoom in' on the object, i.e. center it, normalize it for scale, and thus discount the effects of the background. We then show that combining this with a state-of-the-art classification algorithm leads to significant improvements in performance especially for datasets which are considered particularly hard for recognition, e.g. birds species. The proposed algorithm is much more efficient than other known methods in similar scenarios. Our method is also simpler and we apply it here to different classes of objects, e.g. birds, flowers, cats and dogs. We tested the algorithm on a number of benchmark datasets for fine-grained categorization. It outperforms all the known state-of-the-art methods on these datasets, sometimes by as much as 11%. It improves the performance of our baseline algorithm by 3-4%, consistently on all datasets. We also observed more than a 4% improvement in the recognition performance on a challenging large-scale flower dataset, containing 578 species of flowers and 250,000 images.",
"title": ""
},
{
"docid": "458470e18ce2ab134841f76440cfdc2b",
"text": "Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.",
"title": ""
},
{
"docid": "6d766690805f74495c5b29b889320908",
"text": "With cloud storage services, it is commonplace for data to be not only stored in the cloud, but also shared across multiple users. However, public auditing for such shared data - while preserving identity privacy - remains to be an open challenge. In this paper, we propose the first privacy-preserving mechanism that allows public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification information needed to audit the integrity of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from a third party auditor (TPA), who is still able to verify the integrity of shared data without retrieving the entire file. Our experimental results demonstrate the effectiveness and efficiency of our proposed mechanism when auditing shared data.",
"title": ""
},
{
"docid": "6c3f320eda59626bedb2aad4e527c196",
"text": "Though research on the Semantic Web has progressed at a steady pace, its promise has yet to be realized. One major difficulty is that, by its very nature, the Semantic Web is a large, uncensored system to which anyone may contribute. This raises the question of how much credence to give each source. We cannot expect each user to know the trustworthiness of each source, nor would we want to assign top-down or global credibility values due to the subjective nature of trust. We tackle this problem by employing a web of trust, in which each user provides personal trust values for a small number of other users. We compose these trusts to compute the trust a user should place in any other user in the network. A user is not assigned a single trust rank. Instead, different users may have different trust values for the same user. We define properties for combination functions which merge such trusts, and define a class of functions for which merging may be done locally while maintaining these properties. We give examples of specific functions and apply them to data from Epinions and our BibServ bibliography server. Experiments confirm that the methods are robust to noise, and do not put unreasonable expectations on users. We hope that these methods will help move the Semantic Web closer to fulfilling its promise.",
"title": ""
},
{
"docid": "678d3dccdd77916d0c653d88785e1300",
"text": "BACKGROUND\nFatigue is one of the common complaints of multiple sclerosis (MS) patients, and its treatment is relatively unclear. Ginseng is one of the herbal medicines possessing antifatigue properties, and its administration in MS for such a purpose has been scarcely evaluated. The purpose of this study was to evaluate the efficacy and safety of ginseng in the treatment of fatigue and the quality of life of MS patients.\n\n\nMETHODS\nEligible female MS patients were randomized in a double-blind manner, to receive 250-mg ginseng or placebo twice daily over 3 months. Outcome measures included the Modified Fatigue Impact Scale (MFIS) and the Iranian version of the Multiple Sclerosis Quality Of Life Questionnaire (MSQOL-54). The questionnaires were used after randomization, and again at the end of the study.\n\n\nRESULTS\nOf 60 patients who were enrolled in the study, 52 (86%) subjects completed the trial with good drug tolerance. Statistical analysis showed better effects for ginseng than the placebo as regards MFIS (p = 0.046) and MSQOL (p ≤ 0.0001) after 3 months. No serious adverse events were observed during follow-up.\n\n\nCONCLUSIONS\nThis study indicates that 3-month ginseng treatment can reduce fatigue and has a significant positive effect on quality of life. Ginseng is probably a good candidate for the relief of MS-related fatigue. Further studies are needed to shed light on the efficacy of ginseng in this field.",
"title": ""
}
] | scidocsrr |
ef3f08e17f6ba2cfc17956b583032cf6 | Augmented reality in the smart factory: Supporting workers in an industry 4.0. environment | [
{
"docid": "d8bf55d8a2aaa1061310f3d976a87c57",
"text": "characterized by four distinguishable interface styles, each lasting for many years and optimized to the hardware available at the time. In the first period, the early 1950s and 1960s, computers were used in batch mode with punched-card input and line-printer output; there were essentially no user interfaces because there were no interactive users (although some of us were privileged to be able to do console debugging using switches and lights as our “user interface”). The second period in the evolution of interfaces (early 1960s through early 1980s) was the era of timesharing on mainframes and minicomputers using mechanical or “glass” teletypes (alphanumeric displays), when for the first time users could interact with the computer by typing in commands with parameters. Note that this era persisted even through the age of personal microcomputers with such operating systems as DOS and Unix with their command line shells. During the 1970s, timesharing and manual command lines remained deeply entrenched, but at Xerox PARC the third age of user interfaces dawned. Raster graphics-based networked workstations and “pointand-click” WIMP GUIs (graphical user interfaces based on windows, icons, menus, and a pointing device, typically a mouse) are the legacy of Xerox PARC that we’re still using today. WIMP GUIs were popularized by the Macintosh in 1984 and later copied by Windows on the PC and Motif on Unix workstations. Applications today have much the same look and feel as the early desktop applications (except for the increased “realism” achieved through the use of drop shadows for buttons and other UI widgets); the main advance lies in the shift from monochrome displays to color and in a large set of software-engineering tools for building WIMP interfaces. I find it rather surprising that the third generation of WIMP user interfaces has been so dominant for more than two decades; they are apparently sufficiently good for conventional desktop tasks that the field is stuck comfortably in a rut. I argue in this essay that the status quo does not suffice—that the newer forms of computing and computing devices available today necessitate new thinking t h e h u m a n c o n n e c t i o n Andries van Dam",
"title": ""
}
] | [
{
"docid": "d3f97e0de15ab18296e161e287890e18",
"text": "Nosocomial or hospital acquired infections threaten the survival and neurodevelopmental outcomes of infants admitted to the neonatal intensive care unit, and increase cost of care. Premature infants are particularly vulnerable since they often undergo invasive procedures and are dependent on central catheters to deliver nutrition and on ventilators for respiratory support. Prevention of nosocomial infection is a critical patient safety imperative, and invariably requires a multidisciplinary approach. There are no short cuts. Hand hygiene before and after patient contact is the most important measure, and yet, compliance with this simple measure can be unsatisfactory. Alcohol based hand sanitizer is effective against many microorganisms and is efficient, compared to plain or antiseptic containing soaps. The use of maternal breast milk is another inexpensive and simple measure to reduce infection rates. Efforts to replicate the anti-infectious properties of maternal breast milk by the use of probiotics, prebiotics, and synbiotics have met with variable success, and there are ongoing trials of lactoferrin, an iron binding whey protein present in large quantities in colostrum. Attempts to boost the immunoglobulin levels of preterm infants with exogenous immunoglobulins have not been shown to reduce nosocomial infections significantly. Over the last decade, improvements in the incidence of catheter-related infections have been achieved, with meticulous attention to every detail from insertion to maintenance, with some centers reporting zero rates for such infections. Other nosocomial infections like ventilator acquired pneumonia and staphylococcus aureus infection remain problematic, and outbreaks with multidrug resistant organisms continue to have disastrous consequences. Management of infections is based on the profile of microorganisms in the neonatal unit and community and targeted therapy is required to control the disease without leading to the development of more resistant strains.",
"title": ""
},
{
"docid": "2dc24d2ecaf2494543128f5e9e5f4864",
"text": "Design of a multiphase hybrid permanent magnet (HPM) generator for series hybrid electric vehicle (SHEV) application is presented in this paper. The proposed hybrid excitation topology together with an integral passive rectifier replaces the permanent magnet (PM) machine and active power electronics converter in hybrid/electric vehicles, facilitating the control over constant PM flux-linkage. The HPM topology includes two rotor elements: a PM and a wound field (WF) rotor with a 30% split ratio, coupled on the same shaft in one machine housing. Both rotors share a nine-phase stator that results in higher output voltage and power density when compared to three-phase design. The HPM generator design is based on a 3-kW benchmark PM machine to ensure the feasibility and validity of design tools and procedures. The WF rotor is designed to realize the same pole shape and number as in the PM section and to obtain the same flux-density in the air-gap while minimizing the WF input energy. Having designed and analyzed the machine using equivalent magnetic circuit and finite element analysis, a laboratory prototype HPM generator is built and tested with the measurements compared to predicted results confirming the designed characteristics and machine performance. The paper also presents comprehensive machine loss and mass audits.",
"title": ""
},
{
"docid": "55f95c7b59f17fb210ebae97dbd96d72",
"text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.",
"title": ""
},
{
"docid": "afdc57b5d573e2c99c73deeef3c2fd5f",
"text": "The purpose of this article is to consider oral reading fluency as an indicator of overall reading competence. We begin by examining theoretical arguments for supposing that oral reading fluency may reflect overall reading competence. We then summarize several studies substantiating this phenomenon. Next, we provide an historical analysis of the extent to which oral reading fluency has been incorporated into measurement approaches during the past century. We conclude with recommendations about the assessment of oral reading fluency for research and practice.",
"title": ""
},
{
"docid": "d7e794a106f29f5ebe917c2e7b6007eb",
"text": "In this paper, several recent theoretical conceptions of technology-mediated education are examined and a study of 2159 online learners is presented. The study validates an instrument designed to measure teaching, social, and cognitive presence indicative of a community of learners within the community of inquiry (CoI) framework [Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a textbased environment: Computer conferencing in higher education. The Internet and Higher Education, 2, 1–19; Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7–23]. Results indicate that the survey items cohere into interpretable factors that represent the intended constructs. Further it was determined through structural equation modeling that 70% of the variance in the online students’ levels of cognitive presence, a multivariate measure of learning, can be modeled based on their reports of their instructors’ skills in fostering teaching presence and their own abilities to establish a sense of social presence. Additional analysis identifies more details of the relationship between learner understandings of teaching and social presence and its impact on their cognitive presence. Implications for online teaching, policy, and faculty development are discussed. ! 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8392c5faf4c837fd06b6f50d110b6e84",
"text": "Pool of knowledge available to the mankind depends on the source of learning resources, which can vary from ancient printed documents to present electronic material. The rapid conversion of material available in traditional libraries to digital form needs a significant amount of work if we are to maintain the format and the look of the electronic documents as same as their printed counterparts. Most of the printed documents contain not only characters and its formatting but also some associated non text objects such as tables, charts and graphical objects. It is challenging to detect them and to concentrate on the format preservation of the contents while reproducing them. To address this issue, we propose an algorithm using local thresholds for word space and line height to locate and extract all categories of tables from scanned document images. From the experiments performed on 298 documents, we conclude that our algorithm has an overall accuracy of about 75% in detecting tables from the scanned document images. Since the algorithm does not completely depend on rule lines, it can detect all categories of tables in a range of scanned documents with different font types, styles and sizes to extract their formatting features. Moreover, the algorithm can be applied to locate tables in multi column layouts with small modification in layout analysis. Treating tables with their existing formatting features will tremendously help the reproducing of printed documents for reprinting and updating purposes.",
"title": ""
},
{
"docid": "48a75e28154d630da14fd3dba09d0af8",
"text": "Over the years, artificial intelligence (AI) is spreading its roots in different areas by utilizing the concept of making the computers learn and handle complex tasks that previously require substantial laborious tasks by human beings. With better accuracy and speed, AI is helping lawyers to streamline work processing. New legal AI software tools like Catalyst, Ross intelligence, and Matlab along with natural language processing provide effective quarrel resolution, better legal clearness, and superior admittance to justice and fresh challenges to conventional law firms providing legal services using leveraged cohort correlate model. This paper discusses current applications of legal AI and suggests deep learning and machine learning techniques that can be applied in future to simplify the cumbersome legal tasks.",
"title": ""
},
{
"docid": "64c156ee4171b5b84fd4eedb1d922f55",
"text": "We introduce a large computational subcategorization lexicon which includes subcategorization frame (SCF) and frequency information for 6,397 English verbs. This extensive lexicon was acquired automatically from five corpora and the Web using the current version of the comprehensive subcategorization acquisition system of Briscoe and Carroll (1997). The lexicon is provided freely for research use, along with a script which can be used to filter and build sub-lexicons suited for different natural language processing (NLP) purposes. Documentation is also provided which explains each sub-lexicon option and evaluates its accuracy.",
"title": ""
},
{
"docid": "fca63f719115e863f5245f15f6b1be50",
"text": "Model-based testing (MBT) in hardware-in-the-loop (HIL) platform is a simulation and testing environment for embedded systems, in which test design automation provided by MBT is combined with HIL methodology. A HIL platform is a testing environment in which the embedded system under testing (SUT) assumes to be operating with real-world inputs and outputs. In this paper, we focus on presenting the novel methodologies and tools that were used to conduct the validation of the MBT in HIL platform. Another novelty of the validation approach is that it aims to provide a comprehensive and many-sided process view to validating MBT and HIL related systems including different component, integration and system level testing activities. The research is based on the constructive method of the related scientific literature and testing technologies, and the results are derived through testing and validating the implemented MBT in HIL platform. The used testing process indicated that the functionality of the constructed MBT in HIL prototype platform was validated.",
"title": ""
},
{
"docid": "5d13c7c50cb43de80df7b6f02c866dab",
"text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, even in the black-box case, where the attacker is limited to solely query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or estimating gradients from the output scores. We introduce GenAttack, a gradient-free optimization technique which uses genetic algorithms for synthesizing adversarial examples in the black-box setting. Our experiments on the MNIST, CIFAR-10, and ImageNet datasets show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than existing approaches. For example, in our CIFAR-10 experiments, GenAttack required roughly 2,568 times less queries than the current state-of-the-art black-box attack. Furthermore, we show that GenAttack can successfully attack both the state-of-the-art ImageNet defense, ensemble adversarial training, and non-differentiable, randomized input transformation defenses. GenAttack’s success against ensemble adversarial training demonstrates that its query efficiency enables it to exploit the defense’s weakness to direct black-box attacks. GenAttack’s success against non-differentiable input transformations indicates that its gradient-free nature enables it to be applicable against defenses which perform gradient masking/obfuscation to confuse the attacker. Our results suggest that evolutionary algorithms open up a promising area of research into effective gradient-free black-box attacks.",
"title": ""
},
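The GenAttack record above relies on a genetic algorithm that needs only query access to the target model. The loop below is a generic gradient-free search in the same spirit, assuming a black-box `fitness` function (for example, the target model's loss evaluated on a clean input plus a candidate perturbation); GenAttack's specific selection, crossover and adaptive mutation schedules are not reproduced here.

```python
import numpy as np

def genetic_attack(fitness, dim, pop_size=20, generations=200,
                   mutation_rate=0.05, step=0.3, seed=0):
    """Evolve perturbation vectors that maximise a black-box fitness.

    Only query access to `fitness` is needed; no gradients are used.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-step, step, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)[::-1]
        elite = pop[order[: pop_size // 2]]            # selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            mask = rng.random(dim) < 0.5               # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(dim) < mutation_rate   # sparse mutation
            child = child + mutate * rng.uniform(-step, step, dim)
            children.append(np.clip(child, -step, step))
        pop = np.vstack([elite, children])
    scores = np.array([fitness(p) for p in pop])
    return pop[int(np.argmax(scores))]
```

Clipping each candidate to [-step, step] plays the role of the imperceptibility constraint that such attacks impose on the perturbation.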
{
"docid": "01a70ee73571e848575ed992c1a3a578",
"text": "BACKGROUND\nNursing turnover is a major issue for health care managers, notably during the global nursing workforce shortage. Despite the often hierarchical structure of the data used in nursing studies, few studies have investigated the impact of the work environment on intention to leave using multilevel techniques. Also, differences between intentions to leave the current workplace or to leave the profession entirely have rarely been studied.\n\n\nOBJECTIVE\nThe aim of the current study was to investigate how aspects of the nurse practice environment and satisfaction with work schedule flexibility measured at different organisational levels influenced the intention to leave the profession or the workplace due to dissatisfaction.\n\n\nDESIGN\nMultilevel models were fitted using survey data from the RN4CAST project, which has a multi-country, multilevel, cross-sectional design. The data analysed here are based on a sample of 23,076 registered nurses from 2020 units in 384 hospitals in 10 European countries (overall response rate: 59.4%). Four levels were available for analyses: country, hospital, unit, and individual registered nurse. Practice environment and satisfaction with schedule flexibility were aggregated and studied at the unit level. Gender, experience as registered nurse, full vs. part-time work, as well as individual deviance from unit mean in practice environment and satisfaction with work schedule flexibility, were included at the individual level. Both intention to leave the profession and the hospital due to dissatisfaction were studied.\n\n\nRESULTS\nRegarding intention to leave current workplace, there is variability at both country (6.9%) and unit (6.9%) level. However, for intention to leave the profession we found less variability at the country (4.6%) and unit level (3.9%). Intention to leave the workplace was strongly related to unit level variables. Additionally, individual characteristics and deviance from unit mean regarding practice environment and satisfaction with schedule flexibility were related to both outcomes. Major limitations of the study are its cross-sectional design and the fact that only turnover intention due to dissatisfaction was studied.\n\n\nCONCLUSIONS\nWe conclude that measures aiming to improve the practice environment and schedule flexibility would be a promising approach towards increased retention of registered nurses in both their current workplaces and the nursing profession as a whole and thus a way to counteract the nursing shortage across European countries.",
"title": ""
},
{
"docid": "776cba62170ee8936629aabca314fd46",
"text": "While the Global Positioning System (GPS) tends to be not useful anymore in terms of precise localization once one gets into a building, Low Energy beacons might come in handy instead. Navigating free of signal reception problems throughout a building when one has never visited that place before is a challenge tackled with indoors localization. Using Bluetooth Low Energy1 (BLE) beacons (either iBeacon or Eddystone formats) is the medium to accomplish that. Indeed, different purpose oriented applications can be designed, developed and shaped towards the needs of any person in the context of a certain building. This work presents a series of post-processing filters to enhance the outcome of the estimated position applying trilateration as the main and straightforward technique to locate someone within a building. A later evaluation tries to give enough evidence around the feasibility of this indoor localization technique. A mobile app should be everything a user would need to have within a building in order to navigate inside.",
"title": ""
},
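The beacon record above estimates position by trilateration and then applies post-processing filters. Below is a common least-squares formulation of the trilateration step, assuming distances have already been estimated from RSSI (for example via a log-distance path-loss model); the paper's own filters are not shown, and the beacon coordinates in the example are made up.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares 2-D position from >=3 beacon positions and estimated
    distances.  Linearises by subtracting the first range equation from
    the others: 2(xi-x1)x + 2(yi-y1)y = d1^2 - di^2 + xi^2 + yi^2 - x1^2 - y1^2.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example with three beacons at known positions (metres); true position (2, 3).
beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
print(trilaterate(beacons, [np.hypot(2, 3), np.hypot(3, 3), np.hypot(2, 2)]))
# -> approximately [2. 3.]
```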
{
"docid": "b89099e9b01a83368a1ebdb2f4394eba",
"text": "Orangutans (Pongo pygmaeus and Pongo abelii) are semisolitary apes and, among the great apes, the most distantly related to humans. Raters assessed 152 orangutans on 48 personality descriptors; 140 of these orangutans were also rated on a subjective well-being questionnaire. Principal-components analysis yielded 5 reliable personality factors: Extraversion, Dominance, Neuroticism, Agreeableness, and Intellect. The authors found no factor analogous to human Conscientiousness. Among the orangutans rated on all 48 personality descriptors and the subjective well-being questionnaire, Extraversion, Agreeableness, and low Neuroticism were related to subjective well-being. These findings suggest that analogues of human, chimpanzee, and orangutan personality domains existed in a common ape ancestor.",
"title": ""
},
{
"docid": "330129cb283fac3dc4df9f0c36b1de48",
"text": "Hydrokinetic turbines convert kinetic energy of moving river or tide water into electrical energy. In this work, design considerations of river current turbines are discussed with emphasis on straight bladed Darrieus rotors. Fluid dynamic analysis is carried out to predict the performance of the rotor. Discussions on a broad range of physical and operational conditions that may impact the design scenario are also presented. In addition, a systematic design procedure along with supporting information that would aid various decision making steps are outlined and illustrated by a design example. Finally, the scope for further work is highlighted",
"title": ""
},
{
"docid": "b83e537a2c8dcd24b096005ef0cb3897",
"text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.",
"title": ""
},
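Deep Speaker, as summarised above, trains utterance embeddings with a triplet loss based on cosine similarity. The snippet below is a minimal NumPy formulation of that loss for one (anchor, positive, negative) triplet; the margin value and the random vectors standing in for ResCNN/GRU embeddings are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_cosine_loss(anchor, positive, negative, margin=0.1):
    """Hinge on cosine similarities: the same-speaker pair should be more
    similar than the different-speaker pair by at least `margin`."""
    return max(0.0, margin + cosine(anchor, negative) - cosine(anchor, positive))

# Toy usage with random "utterance embeddings".
rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 512))
print(triplet_cosine_loss(a, a + 0.05 * p, n))   # near zero: positive is close to anchor
```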
{
"docid": "76e7f63fa41d6d457e6e4386ad7b9896",
"text": "A growing body of work has highlighted the challenges of identifying the stance that a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts from the debate website ConvinceMe.net, for 14 topics ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for classifying stance on a per topic basis that range from 60% to 75%, as compared to unigram baselines that vary between 47% and 66%. Our results suggest that features and methods that take into account the dialogic context of such posts improve accuracy.",
"title": ""
},
{
"docid": "0c886080015642aa5b7c103adcd2a81d",
"text": "The problem of gauging information credibility on social networks has received considerable attention in recent years. Most previous work has chosen Twitter, the world's largest micro-blogging platform, as the premise of research. In this work, we shift the premise and study the problem of information credibility on Sina Weibo, China's leading micro-blogging service provider. With eight times more users than Twitter, Sina Weibo is more of a Facebook-Twitter hybrid than a pure Twitter clone, and exhibits several important characteristics that distinguish it from Twitter. We collect an extensive set of microblogs which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Sina Weibo. Unlike previous studies on Twitter where the labeling of rumors is done manually by the participants of the experiments, the official nature of this service ensures the high quality of the dataset. We then examine an extensive set of features that can be extracted from the microblogs, and train a classifier to automatically detect the rumors from a mixed set of true information and false information. The experiments show that some of the new features we propose are indeed effective in the classification, and even the features considered in previous studies have different implications with Sina Weibo than with Twitter. To the best of our knowledge, this is the first study on rumor analysis and detection on Sina Weibo.",
"title": ""
},
{
"docid": "95db9ce9faaf13e8ff8d5888a6737683",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: boydce1@auburn.edu Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
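The water-quality record above argues that pH readings may be averaged directly, and that back-transforming to [H+] first lets extreme values dominate the result. The short example below only illustrates that arithmetic difference on made-up readings; the numbers are illustrative, not data from the paper.

```python
import math

readings = [6.0, 7.0, 8.0, 9.0]                    # hypothetical pH measurements

direct_mean = sum(readings) / len(readings)        # average the pH values directly

h_conc = [10 ** (-ph) for ph in readings]          # back-transform to [H+]
mean_h = sum(h_conc) / len(h_conc)
transformed_mean = -math.log10(mean_h)             # convert the mean [H+] back to pH

print(direct_mean)                  # 7.5
print(round(transformed_mean, 2))   # ~6.56, dragged toward the most acidic reading
```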
{
"docid": "f8cd8b54218350fa18d4d59ca0a58a05",
"text": "This study provides conceptual and empirical arguments why an assessment of applicants' procedural knowledge about interpersonal behavior via a video-based situational judgment test might be valid for academic and postacademic success criteria. Four cohorts of medical students (N = 723) were followed from admission to employment. Procedural knowledge about interpersonal behavior at the time of admission was valid for both internship performance (7 years later) and job performance (9 years later) and showed incremental validity over cognitive factors. Mediation analyses supported the conceptual link between procedural knowledge about interpersonal behavior, translating that knowledge into actual interpersonal behavior in internships, and showing that behavior on the job. Implications for theory and practice are discussed.",
"title": ""
},
{
"docid": "019d5deed0ed1e5b50097d5dc9121cb6",
"text": "Within interactive narrative research, agency is largely considered in terms of a player's autonomy in a game, defined as theoretical agency. Rather than in terms of whether or not the player feels they have agency, their perceived agency. An effective interactive narrative needs to provide a player a level of agency that satisfies their desires and must do that without compromising its own structure. Researchers frequently turn to techniques for increasing theoretical agency to accomplish this. This paper proposes an approach to categorize and explore techniques in which a player's level of perceived agency is affected without requiring more or less theoretical agency.",
"title": ""
}
] | scidocsrr |
a590e37d84d0ca3bf95f6e43784730bc | A Survey of Modern Questions and Challenges in Feature Extraction | [
{
"docid": "f0f47ce0fc361740aedf17d6d2061e03",
"text": "In supervised learning scenarios, feature selection has be en studied widely in the literature. Selecting features in unsupervis ed learning scenarios is a much harder problem, due to the absence of class la bel that would guide the search for relevant information. And, almos t all of previous unsupervised feature selection methods are “wrapper ” techniques that require a learning algorithm to evaluate the candidate fe ture subsets. In this paper, we propose a “filter” method for feature select ion which is independent of any learning algorithm. Our method can be per formed in either supervised or unsupervised fashion. The proposed me thod is based on the observation that, in many real world classification pr oblems, data from the same class are often close to each other. The importa nce of a feature is evaluated by its power of locality preserving, or , Laplacian Score. We compare our method with data variance (unsupervised) an d Fisher score (supervised) on two data sets. Experimental re sults demonstrate the effectiveness and efficiency of our algorithm.",
"title": ""
}
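The passage above scores features by their locality-preserving power (the Laplacian Score) computed from a nearest-neighbour graph. A compact NumPy version of that computation is sketched below: build a kNN graph with heat-kernel weights, then score each feature as f̃ᵀLf̃ / f̃ᵀDf̃, where smaller is better. Parameter choices such as k and the kernel width t are assumptions for illustration.

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Laplacian Score for each column (feature) of X, shape (n_samples, n_features).

    Lower scores indicate features that better preserve local structure.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Symmetric kNN graph with heat-kernel weights.
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sq[i])[1:k + 1]            # skip self
        S[i, nbrs] = np.exp(-sq[i, nbrs] / t)
    S = np.maximum(S, S.T)
    D = np.diag(S.sum(axis=1))
    L = D - S                                        # graph Laplacian
    d = S.sum(axis=1)
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f_tilde = f - (f @ d) / d.sum()              # remove the weighted mean
        denom = f_tilde @ D @ f_tilde
        scores.append((f_tilde @ L @ f_tilde) / denom if denom > 1e-12 else np.inf)
    return np.array(scores)
```

Ranking features by ascending score then gives the unsupervised selection order described in the abstract.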
] | [
{
"docid": "f4bb27786cf81892f30a01796fbbdbde",
"text": "Kiosks are increasingly being heralded as a technology through which governments, government departments and local authorities or municipalities can engage with citizens. In particular, they have attractions in their potential to bridge the digital divide. There is some evidence to suggest that the citizen uptake of kiosks and indeed other channels for e-government, such as web sites, is slow, although studies on the use of kiosks for health information provision offer some interesting perspectives on user behaviour with kiosk technology. This article argues that the delivery of e-government through kiosks presents a number of strategic challenges, which will need to be negotiated over the next few years in order that kiosk applications are successful in enhancing accessibility to and engagement with egovernment. The article suggests that this involves consideration of: the applications to be delivered through a kiosk; one stop shop service and knowledge architectures; mechanisms for citizen identification; and, the integration of kiosks within the total interface between public bodies and their communities. The article concludes by outlining development and research agendas in each of these areas.",
"title": ""
},
{
"docid": "c4ff647b5962d3d713577c16a7a9cae5",
"text": "In this paper we propose the use of an illumination invariant transform to improve many aspects of visual localisation, mapping and scene classification for autonomous road vehicles. The illumination invariant colour space stems from modelling the spectral properties of the camera and scene illumination in conjunction, and requires only a single parameter derived from the image sensor specifications. We present results using a 24-hour dataset collected using an autonomous road vehicle, demonstrating increased consistency of the illumination invariant images in comparison to raw RGB images during daylight hours. We then present three example applications of how illumination invariant imaging can improve performance in the context of vision-based autonomous vehicles: 6-DoF metric localisation using monocular cameras over a 24-hour period, life-long visual localisation and mapping using stereo, and urban scene classification in changing environments. Our ultimate goal is robust and reliable vision-based perception and navigation an attractive proposition for low-cost autonomy for road vehicles.",
"title": ""
},
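The localisation record above uses a one-parameter illumination-invariant colour space derived from the camera's spectral response. The function below implements a commonly used single-channel form, I = 0.5 + log G − α log B − (1 − α) log R; treat both the exact expression and the example α as assumptions rather than the paper's own values, since α must be derived from the specific sensor's peak wavelengths.

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """Map an RGB image (floats in (0, 1]) to a single illumination-invariant channel.

    `alpha` is the sensor-specific parameter; 0.48 is only a placeholder value.
    """
    eps = 1e-6                                   # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.5 + np.log(g + eps)
            - alpha * np.log(b + eps)
            - (1.0 - alpha) * np.log(r + eps))
```

The intended effect is that pixels of the same surface seen in direct sun and in shadow map to much closer values than in raw RGB, which is what makes such a space attractive for 24-hour localisation.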
{
"docid": "1f1a8f5f7612e131ce7b99c13aa4d5db",
"text": "Background subtraction can be treated as the binary classification problem of highlighting the foreground region in a video whilst masking the background region, and has been broadly applied in various vision tasks such as video surveillance and traffic monitoring. However, it still remains a challenging task due to complex scenes and for lack of the prior knowledge about the temporal information. In this paper, we propose a novel background subtraction model based on 3D convolutional neural networks (3D CNNs) which combines temporal and spatial information to effectively separate the foreground from all the sequences in an end-to-end manner. Different from conventional models, we view background subtraction as three-class classification problem, i.e., the foreground, the background and the boundary. This design can obtain more reasonable results than existing baseline models. Experiments on the Change Detection 2012 dataset verify the potential of our model in both quantity and quality.",
"title": ""
},
{
"docid": "799f9ca9ea641c1893e4900fdc29c8d4",
"text": "This paper presents a large scale general purpose image database with human annotated ground truth. Firstly, an all-in-all labeling framework is proposed to group visual knowledge of three levels: scene level (global geometric description), object level (segmentation, sketch representation, hierarchical decomposition), and low-mid level (2.1D layered representation, object boundary attributes, curve completion, etc.). Much of this data has not appeared in previous databases. In addition, And-Or Graph is used to organize visual elements to facilitate top-down labeling. An annotation tool is developed to realize and integrate all tasks. With this tool, we’ve been able to create a database consisting of more than 636,748 annotated images and video frames. Lastly, the data is organized into 13 common subsets to serve as benchmarks for diverse evaluation endeavors.",
"title": ""
},
{
"docid": "ea94a3c561476e88d5ac2640656a3f92",
"text": "Point cloud is a basic description of discrete shape information. Parameterization of unorganized points is important for shape analysis and shape reconstruction of natural objects. In this paper we present a new algorithm for global parameterization of an unorganized point cloud and its application to the meshing of the cloud. Our method is guided by principal directions so as to preserve the intrinsic geometric properties. After initial estimation of principal directions, we develop a kNN(k-nearest neighbor) graph-based method to get a smooth direction field. Then the point cloud is cut to be topologically equivalent to a disk. The global parameterization is computed and its gradients align well with the guided direction field. A mixed integer solver is used to guarantee a seamless parameterization across the cut lines. The resultant parameterization can be used to triangulate and quadrangulate the point cloud simultaneously in a fully automatic manner, where the shape of the data is of any genus. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f9128951d6c842689f61fc19c79f238",
"text": "This paper concerns image reconstruction for helical x-ray transmission tomography (CT) with multi-row detectors. We introduce two approximate cone-beam (CB) filtered-backprojection (FBP) algorithms of the Feldkamp type, obtained by extending to three dimensions (3D) two recently proposed exact FBP algorithms for 2D fan-beam reconstruction. The new algorithms are similar to the standard Feldkamp-type FBP for helical CT. In particular, they can reconstruct each transaxial slice from data acquired along an arbitrary segment of helix, thereby efficiently exploiting the available data. In contrast to the standard Feldkamp-type algorithm, however, the redundancy weight is applied after filtering, allowing a more efficient numerical implementation. To partially alleviate the CB artefacts, which increase with increasing values of the helical pitch, a frequency-mixing method is proposed. This method reconstructs the high frequency components of the image using the longest possible segment of helix, whereas the low frequencies are reconstructed using a minimal, short-scan, segment of helix to minimize CB artefacts. The performance of the algorithms is illustrated using simulated data.",
"title": ""
},
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
},
{
"docid": "1027ce2c8e3a231fe8ab3f469a857f82",
"text": "There are two major challenges for a high-performance remote-sensing database. First, it must provide low-latency retrieval of very large volumes of spatio-temporal data. This requires effective declustering and placement of a multidimensional dataset onto a large disk farm. Second, the order of magnitude reduction in data-size due to postprocessing makes it imperative, from a performance perspective, that the postprocessing be done on the machine that holds the data. This requires careful coordination of computation and data retrieval. This paper describes the design, implementation and evaluation of Titan, a parallel shared-nothing database designed for handling remotesensing data. The computational platform for Titan is a 16-processor IBM SP-2 with four fast disks attached to each processor. Titan is currently operational and contains about 24 GB of AVHRR data from the NOAA-7 satellite. The experimental results show that Titan provides good performance for global queries and interactive response times for local queries.",
"title": ""
},
{
"docid": "a70d064af5e8c5842b8ca04abc3fb2d6",
"text": "In the current scenario of cloud computing, heterogeneous resources are located in various geographical locations requiring security-aware resource management to handle security threats. However, existing techniques are unable to protect systems from security attacks. To provide a secure cloud service, a security-based resource management technique is required that manages cloud resources automatically and delivers secure cloud services. In this paper, we propose a self-protection approach in cloud resource management called SECURE, which offers self-protection against security attacks and ensures continued availability of services to authorized users. The performance of SECURE has been evaluated using SNORT. The experimental results demonstrate that SECURE performs effectively in terms of both the intrusion detection rate and false positive rate. Further, the impact of security on quality of service (QoS) has been analyzed.",
"title": ""
},
{
"docid": "b50b912cb79368db51825e7cbea2df5d",
"text": "Effectively solving the problem of sketch generation, which aims to produce human-drawing-like sketches from real photographs, opens the door for many vision applications such as sketch-based image retrieval and nonphotorealistic rendering. In this paper, we approach automatic sketch generation from a human visual perception perspective. Instead of gathering insights from photographs, for the first time, we extract information from a large pool of human sketches. In particular, we study how multiple Gestalt rules can be encapsulated into a unified perceptual grouping framework for sketch generation. We further show that by solving the problem of Gestalt confliction, i.e., encoding the relative importance of each rule, more similar to human-made sketches can be generated. For that, we release a manually labeled sketch dataset of 96 object categories and 7,680 sketches. A novel evaluation framework is proposed to quantify human likeness of machinegenerated sketches by examining how well they can be classified using models trained from human data. Finally, we demonstrate the superiority of our sketches under the practical application of sketch-based image retrieval.",
"title": ""
},
{
"docid": "3432123018be278cb2e85892925ce4e6",
"text": "The cellular heterogeneity and complex tissue architecture of most tumor samples is a major obstacle in image analysis on standard hematoxylin and eosin-stained (H&E) tissue sections. A mixture of cancer and normal cells complicates the interpretation of their cytological profiles. Furthermore, spatial arrangement and architectural organization of cells are generally not reflected in cellular characteristics analysis. To address these challenges, first we describe an automatic nuclei segmentation of H&E tissue sections. In the task of deconvoluting cellular heterogeneity, we adopt Landmark based Spectral Clustering (LSC) to group individual nuclei in such a way that nuclei in the same group are more similar. We next devise spatial statistics for analyzing spatial arrangement and organization, which are not detectable by individual cellular characteristics. Our quantitative, spatial statistics analysis could benefit H&E section analysis by refining and complementing cellular characteristics analysis.",
"title": ""
},
{
"docid": "a70bf8d63587aaa719d3171155943153",
"text": "Population-based linkage analysis is a new method for analysing genomewide single nucleotide polymorphism (SNP) genotype data in case–control samples, which does not assume a common disease, common variant model. The genome is scanned for extended segments that show increased identity-by-descent sharing within case–case pairs, relative to case–control or control–control pairs. The method is robust to allelic heterogeneity and is suited to mapping genes which contain multiple, rare susceptibility variants of relatively high penetrance. We analysed genomewide SNP datasets for two schizophrenia case–control cohorts, collected in Aberdeen (461 cases, 459 controls) and Munich (429 cases, 428 controls). Population-based linkage testing must be performed within homogeneous samples and it was therefore necessary to analyse the cohorts separately. Each cohort was first subjected to several procedures to improve genetic homogeneity, including identity-by-state outlier detection and multidimensional scaling analysis. When testing only cases who reported a positive family history of major psychiatric disease, consistent with a model of strongly penetrant susceptibility alleles, we saw a distinct peak on chromosome 19q in both cohorts that appeared in meta-analysis (P=0.000016) to surpass the traditional level for genomewide significance for complex trait linkage. The linkage signal was also present in a third case–control sample for familial bipolar disorder, such that meta-analysing all three datasets together yielded a linkage P=0.0000026. A model of rare but highly penetrant disease alleles may be more applicable to some instances of major psychiatric diseases than the common disease common variant model, and we therefore suggest that other genome scan datasets are analysed with this new, complementary method.",
"title": ""
},
{
"docid": "e112af9e35690b64acc7242611b39dd2",
"text": "Body sensor network systems can help people by providing healthcare services such as medical monitoring, memory enhancement, medical data access, and communication with the healthcare provider in emergency situations through the SMS or GPRS [1,2]. Continuous health monitoring with wearable [3] or clothing-embedded transducers [4] and implantable body sensor networks [5] will increase detection of emergency conditions in at risk patients. Not only the patient, but also their families will benefit from these. Also, these systems provide useful methods to remotely acquire and monitor the physiological signals without the need of interruption of the patient’s normal life, thus improving life quality [6,7].",
"title": ""
},
{
"docid": "81e65e50b96a5b38fbbddeb6ad0acfe4",
"text": "Effectively using full syntactic parsing information in Neural Networks (NNs) to solve relational tasks, e.g., question similarity, is still an open problem. In this paper, we propose to inject structural representations in NNs by (i) learning an SVM model using Tree Kernels (TKs) on relatively few pairs of questions (few thousands) as gold standard (GS) training data is typically scarce, (ii) predicting labels on a very large corpus of question pairs, and (iii) pre-training NNs on such large corpus. The results on Quora and SemEval question similarity datasets show that NNs trained with our approach can learn more accurate models, especially after fine tuning on GS.",
"title": ""
},
{
"docid": "f87fea9cd76d1545c34f8e813347146e",
"text": "In fault detection and isolation, diagnostic test results are commonly used to compute a set of diagnoses, where each diagnosis points at a set of components which might behave abnormally. In distributed systems consisting of multiple control units, the test results in each unit can be used to compute local diagnoses while all test results in the complete system give the global diagnoses. It is an advantage for both repair and fault-tolerant control to have access to the global diagnoses in each unit since these diagnoses represent all test results in all units. However, when the diagnoses, for example, are to be used to repair a unit, only the components that are used by the unit are of interest. The reason for this is that it is only these components that could have caused the abnormal behavior. However, the global diagnoses might include components from the complete system and therefore often include components that are superfluous for the unit. Motivated by this observation, a new type of diagnosis is proposed, namely, the condensed diagnosis. Each unit has a unique set of condensed diagnoses which represents the global diagnoses. The benefit of the condensed diagnoses is that they only include components used by the unit while still representing the global diagnoses. The proposed method is applied to an automotive vehicle, and the results from the application study show the benefit of using condensed diagnoses compared to global diagnoses.",
"title": ""
},
{
"docid": "216efaca84a9df871e7919f0b5215b77",
"text": "This paper is concerned with some experimental results and practical evaluations of a three-level phase shifted ZVS-PWM DC-DC converter with neutral point clamping diodes and flying capacitor for a variety of gas metal arc welding machines. This new DC-DC converter suitable for high power applications is implemented by modifying the high-frequency-linked half bridge soft switching PWM DC-DC converter with two active edge resonant cells (AERCs) in high side and low side DC rails, which is previously-developed put into practice by the authors. The operating principle of the three-level phase-shift ZVS-PWM DC-DC converter and its experimental and simulation results including power regulation characteristics vs. phase-shifted angle and power conversion efficiency characteristics in addition to power loss analysis are illustrated and evaluated comparatively from a practical point of view, along with the remarkable advantageous features as compared with previously-developed one.",
"title": ""
},
{
"docid": "827e9045f932b146a8af66224e114be6",
"text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.",
"title": ""
},
{
"docid": "66fc8b47dd186fa17240ee64aadf7ca7",
"text": "Posterior reversible encephalopathy syndrome (PRES) is characterized by variable associations of seizure activity, consciousness impairment, headaches, visual abnormalities, nausea/vomiting, and focal neurological signs. The PRES may occur in diverse situations. The findings on neuroimaging in PRES are often symmetric and predominate edema in the white matter of the brain areas perfused by the posterior brain circulation, which is reversible when the underlying cause is treated. We report the case of PRES in normotensive patient with hyponatremia.",
"title": ""
},
{
"docid": "d449a4d183c2a3e1905935f624d684d3",
"text": "This paper introduces the approach CBRDIA (Case-based Reasoning for Document Invoice Analysis) which uses the principles of case-based reasoning to analyze, recognize and interpret invoices. Two CBR cycles are performed sequentially in CBRDIA. The first one consists in checking whether a similar document has already been processed, which makes the interpretation of the current one easy. The second cycle works if the first one fails. It processes the document by analyzing and interpreting its structuring elements (adresses, amounts, tables, etc) one by one. The CBR cycles allow processing documents from both knonwn or unknown classes. Applied on 923 invoices, CBRDIA reaches a recognition rate of 85,22% for documents of known classes and 74,90% for documents of unknown classes.",
"title": ""
},
{
"docid": "741078742178d09f911ef9633befeb9b",
"text": "We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences which are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the performance of the kernel compared with a standard word feature space kernel [4] is made showing encouraging results.",
"title": ""
}
] | scidocsrr |
dceaf350cf920d3bf08543d959cbd77d | Characterizing Online Rumoring Behavior Using Multi-Dimensional Signatures | [
{
"docid": "206263868f70a1ce6aa734019d215a03",
"text": "This paper examines microblogging information diffusion activity during the 2011 Egyptian political uprisings. Specifically, we examine the use of the retweet mechanism on Twitter, using empirical evidence of information propagation to reveal aspects of work that the crowd conducts. Analysis of the widespread contagion of a popular meme reveals interaction between those who were \"on the ground\" in Cairo and those who were not. However, differences between information that appeals to the larger crowd and those who were doing on-the-ground work reveal important interplay between the two realms. Through both qualitative and statistical description, we show how the crowd expresses solidarity and does the work of information processing through recommendation and filtering. We discuss how these aspects of work mutually sustain crowd interaction in a politically sensitive context. In addition, we show how features of this retweet-recommendation behavior could be used in combination with other indicators to identify information that is new and likely coming from the ground.",
"title": ""
},
{
"docid": "7057f72a1ce2e92ae01785d5b6e4a1d5",
"text": "Social transmission is everywhere. Friends talk about restaurants , policy wonks rant about legislation, analysts trade stock tips, neighbors gossip, and teens chitchat. Further, such interpersonal communication affects everything from decision making and well-But although it is clear that social transmission is both frequent and important, what drives people to share, and why are some stories and information shared more than others? Traditionally, researchers have argued that rumors spread in the \" 3 Cs \" —times of conflict, crisis, and catastrophe (e.g., wars or natural disasters; Koenig, 1985)―and the major explanation for this phenomenon has been generalized anxiety (i.e., apprehension about negative outcomes). Such theories can explain why rumors flourish in times of panic, but they are less useful in explaining the prevalence of rumors in positive situations, such as the Cannes Film Festival or the dot-com boom. Further, although recent work on the social sharing of emotion suggests that positive emotion may also increase transmission, why emotions drive sharing and why some emotions boost sharing more than others remains unclear. I suggest that transmission is driven in part by arousal. Physiological arousal is characterized by activation of the autonomic nervous system (Heilman, 1997), and the mobilization provided by this excitatory state may boost sharing. This hypothesis not only suggests why content that evokes more of certain emotions (e.g., disgust) may be shared more than other a review), but also suggests a more precise prediction , namely, that emotions characterized by high arousal, such as anxiety or amusement (Gross & Levenson, 1995), will boost sharing more than emotions characterized by low arousal, such as sadness or contentment. This idea was tested in two experiments. They examined how manipulations that increase general arousal (i.e., watching emotional videos or jogging in place) affect the social transmission of unrelated content (e.g., a neutral news article). If arousal increases transmission, even incidental arousal (i.e., outside the focal content being shared) should spill over and boost sharing. In the first experiment, 93 students completed what they were told were two unrelated studies. The first evoked specific emotions by using film clips validated in prior research (Christie & Friedman, 2004; Gross & Levenson, 1995). Participants in the control condition watched a neutral clip; those in the experimental conditions watched an emotional clip. Emotional arousal and valence were manipulated independently so that high-and low-arousal emotions of both a positive (amusement vs. contentment) and a negative (anxiety vs. …",
"title": ""
},
{
"docid": "7641f8f3ed2afd0c16665b44c1216e79",
"text": "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets.",
"title": ""
}
] | [
{
"docid": "17cd41a64a845ba400ee5018eb899d15",
"text": "Structured prediction requires searching over a combinatorial number of structures. To tackle it, we introduce SparseMAP: a new method for sparse structured inference, and its natural loss function. SparseMAP automatically selects only a few global structures: it is situated between MAP inference, which picks a single structure, and marginal inference, which assigns nonzero probability to all structures, including implausible ones. SparseMAP can be computed using only calls to a MAP oracle, making it applicable to problems with intractable marginal inference, e.g., linear assignment. Sparsity makes gradient backpropagation efficient regardless of the structure, enabling us to augment deep neural networks with generic and sparse structured hidden layers. Experiments in dependency parsing and natural language inference reveal competitive accuracy, improved interpretability, and the ability to capture natural language ambiguities, which is attractive for pipeline systems.",
"title": ""
},
{
"docid": "7e99c34beafefdfcf11750e5acfc8ac0",
"text": "Emerging technologies offer exciting new ways of using entertainment technology to create fantastic play experiences and foster interactions between players. Evaluating entertainment technology is challenging because success isn’ t defined in terms of productivity and performance, but in terms of enjoyment and interaction. Current subjective methods of evaluating entertainment technology aren’ t sufficiently robust. This paper describes two experiments designed to test the efficacy of physiological measures as evaluators of user experience with entertainment technologies. We found evidence that there is a different physiological response in the body when playing against a computer versus playing against a friend. These physiological results are mirrored in the subjective reports provided by the participants. In addition, we provide guidelines for collecting physiological data for user experience analysis, which were informed by our empirical investigations. This research provides an initial step towards using physiological responses to objectively evaluate a user’s experience with entertainment technology.",
"title": ""
},
{
"docid": "1fc965670f71d9870a4eea93d129e285",
"text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e4861d48d54e0c48f241b5adb1a893e6",
"text": "With the rapid development of the World Wide Web, electronic word-of-mouth interaction has made consumers active participants. Nowadays, a large number of reviews posted by the consumers on the Web provide valuable information to other consumers. Such information is highly essential for decision making and hence popular among the internet users. This information is very valuable not only for prospective consumers to make decisions but also for businesses in predicting the success and sustainability. In this paper, a Gini Index based feature selection method with Support Vector Machine (SVM) classifier is proposed for sentiment classification for large movie review data set. The results show that our Gini Index method has better classification performance in terms of reduced error rate and accuracy.",
"title": ""
},
{
"docid": "cd35c6e2763b634d23de1903a3261c59",
"text": "We investigate the Belousov-Zhabotinsky (BZ) reaction in an attempt to establish a basis for computation using chemical oscillators coupled via inhibition. The system consists of BZ droplets suspended in oil. Interdrop coupling is governed by the non-polar communicator of inhibition, Br2. We consider a linear arrangement of three droplets to be a NOR gate, where the center droplet is the output and the other two are inputs. Oxidation spikes in the inputs, which we define to be TRUE, cause a delay in the next spike of the output, which we read to be FALSE. Conversely, when the inputs do not spike (FALSE) there is no delay in the output (TRUE), thus producing the behavior of a NOR gate. We are able to reliably produce NOR gates with this behavior in microfluidic experiment.",
"title": ""
},
{
"docid": "bb4001c4cb5fde8d34fd48ee50eb053c",
"text": "We consider the problem of identifying the causal direction between two discrete random variables using observational data. Unlike previous work, we keep the most general functional model but make an assumption on the unobserved exogenous variable: Inspired by Occam’s razor, we assume that the exogenous variable is simple in the true causal direction. We quantify simplicity using Rényi entropy. Our main result is that, under natural assumptions, if the exogenous variable has lowH0 entropy (cardinality) in the true direction, it must have high H0 entropy in the wrong direction. We establish several algorithmic hardness results about estimating the minimum entropy exogenous variable. We show that the problem of finding the exogenous variable with minimum H1 entropy (Shannon Entropy) is equivalent to the problem of finding minimum joint entropy given n marginal distributions, also known as minimum entropy coupling problem. We propose an efficient greedy algorithm for the minimum entropy coupling problem, that for n = 2 provably finds a local optimum. This gives a greedy algorithm for finding the exogenous variable with minimum Shannon entropy. Our greedy entropy-based causal inference algorithm has similar performance to the state of the art additive noise models in real datasets. One advantage of our approach is that we make no use of the values of random variables but only their distributions. Our method can therefore be used for causal inference for both ordinal and also categorical data, unlike additive noise models.",
"title": ""
},
{
"docid": "aeccfd7c8f77e58375890a7c2bb87de7",
"text": "The aim of this paper is to determine acceptance of internet banking systems among potential young users, specifically future marketers, who significantly affect the continuous usage of internet banking service. It attempted to examine the impact of Computer Self Efficacy (CSE) and extended Technology Acceptance Model (TAM) on the behavioral intention (BI) to use the internet banking systems. Measure of CSE was based on the Self Service Technology, as proposed by Compaeu and Higgins. A technology acceptance model for internet banking system was developed based on the modified version of TAM to examine the effects of Perceived Usefulness (PU), Perceived Ease of use (PE) and Perceived Credibility (PC) of extended TAM on the BI to use the internet banking systems. PU and PE are the established dimensions of classical TAM and Perceived credibility (PC) is the additional dimension to be included in the conceptual model of this study. Data were obtained from 222 undergraduates marketing students in a Malaysia’s public university. The finding indicated that CSE, PU, PE and PC of extended TAM were determinants of users’ acceptance of internet banking systems. PU, PE and PC were significantly affected BI, and respondents’ perceived credibility of the internet banking system had the strongest impact on their intention to use the system. This research validated that PU, PE and PC of the extended TAM were good predictors in understanding individual responses to information technology systems. The result of this study highlighted that issues of privacy and security of PC are important in the study of information systems acceptance, suggesting that internet banking providers need to address these issues effectively to convince potential users to use internet banking service. This study also validated the critical role of CSE in predicting individual responses to information technology systems. The finding unveiled that indirect relationship existed between CSE and BI through PU, PE and PC of TAM.",
"title": ""
},
{
"docid": "aa65dc18169238ef973ef24efb03f918",
"text": "A number of national studies point to a trend in which highly selective and elite private and public universities are becoming less accessible to lower-income students. At the same time there have been surprisingly few studies of the actual characteristics and academic experiences of low-income students or comparisons of their undergraduate experience with those of more wealthy students. This paper explores the divide between poor and rich students, first comparing a group of selective US institutions and their number and percentage of Pell Grant recipients and then, using institutional data and results from the University of California Undergraduate Experience Survey (UCUES), presenting an analysis of the high percentage of low-income undergraduate students within the University of California system — who they are, their academic performance and quality of their undergraduate experience. Among our conclusions: The University of California has a strikingly higher number of lowincome students when compared to a sample group of twenty-four other selective public and private universities and colleges, including the Ivy Leagues and a sub-group of other California institutions such as Stanford and the University of Southern California. Indeed, the UC campuses of Berkeley, Davis, and UCLA each have more Pell Grant students than all of the eight Ivy League institutions combined. However, one out of three Pell Grant recipients at UC have at least one parent with a four-year college degree, calling into question the assumption that “low-income” and “first-generation” are interchangeable groups of students. Low-income students, and in particular Pell Grant recipients, at UC have only slightly lower GPAs than their more wealthy counterparts in both math, science and engineering, and in humanities and social science fields. Contrary to some previous research, we find that low-income students have generally the same academic and social satisfaction levels; and are similar in their sense of belonging within their campus communities. However, there are some intriguing results across UC campuses, with low-income students somewhat less satisfied at those campuses where there are more affluent student bodies and where lower-income students have a smaller presence. An imbalance between rich and poor is the oldest and most fatal ailment of all republics — Plutarch There has been a growing and renewed concern among scholars of higher education and policymakers about increasing socioeconomic disparities in American society. Not surprisingly, these disparities are increasingly reflected * The SERU Project is a collaborative study based at the Center for Studies in Higher Education at UC Berkeley and focused on developing new types of data and innovative policy relevant scholarly analyses on the academic and civic experience of students at major research universities. For further information on the project, see http://cshe.berkeley.edu/research/seru/ ** John Aubrey Douglass is Senior Research Fellow – Public Policy and Higher Education at the Center for Studies in Higher Education at UC Berkeley and coPI of the SERU Project; Gregg Thomson is Director of the Office of Student Research at UC Berkeley and a co-PI of the SERU Project. We would like to thank David Radwin at OSR and a SERU Project Research Associate for his collaboration with data analysis. 
in the enrollment of students in the nation's cadre of highly selective, elite private universities, and increasingly among public universities. Particularly over the past three decades, "brand name" prestige private universities and colleges have moved to a high tuition fee and high financial aid model, with the concept that a significant portion of generated tuition revenue can be redirected toward financial aid for either low-income or merit-based scholarships. With rising costs, declining subsidization by state governments, and the shift of federal financial aid toward loans versus grants in aid, public universities are moving from a low fee model toward what is best called a moderate fee and high financial aid model – a model that is essentially evolving. There is increasing evidence, however, that neither the private nor the evolving public tuition and financial aid model is working. Students from wealthy families congregate at the most prestigious private and public institutions, with significant variance depending on the state and region of the nation, reflecting the quality and composition of state systems of higher education. A 2004 study by Sandy Astin and Leticia Oseguera looked at a number of selective private and public universities and concluded that the number and percentage of students from low-income and middle-income families had declined while the number from wealthy families increased. "American higher education, in other words, is more socioeconomically stratified today than at any other time during the past three decades," they note. One reason, they speculated, may be "the increasing competitiveness among prospective college students for admission to the country's most selective colleges and universities" (Astin and Oseguera 2004). A more recent study by Danette Gerald and Kati Haycock (2006) looked at the socioeconomic status (SES) of undergraduate students at a selective group of fifty "premier" public universities and reached a similar conclusion – but one more alarming because of the important historical mission of public universities to provide broad access, a formal mandate or social contract. Though more open to students from low-income families than their private counterparts, the premier publics had declined in the percentage of students with federally funded Pell Grants (federal grants to students generally with family incomes below $40,000 annually) when compared to other four-year public institutions in the nation. Ranging from $431 to a maximum of $4,731, Pell Grants, and the criteria for selection of recipients, have long served as a benchmark on SES access. Pell Grant students have, on average, a family income of only $19,300. On average, note Gerald and Haycock, the selected premier publics have some 22% of their enrolled undergraduates with Pell Grants; all public four-year institutions have some 31% with Pell Grants; private institutions have an average of around 14% (Gerald and Haycock 2006). But it is important to note that there are a great many dimensions in understanding equity and access among private and public higher education institutions (HEIs). For one, there is a need to disaggregate types of institutions, for example, private versus public, university versus community college.
Public and private institutions, and particularly highly selective universities and colleges, tend to draw from different demographic pools, with public universities largely linked to the socioeconomic stratification of their home state. Second, there are the factors related to rising tuition and increasingly complicated and, one might argue, inadequate approaches to financial aid in the U.S. With the slowdown in the US economy, the US Department of Education recently estimated that demand for Pell Grants exceeded projected demand by some 800,000 students; total applications for the grant program are up 16 percent over the previous year. This will require an additional $6 billion to the Pell Grant's current budget of $14 billion next year.1 Economic downturns tend to push demand up for access to higher education among the middle and lower class, although most profoundly at the community college level. This phenomenon plus continued growth in the nation's population, particularly in states such as California, Texas and Florida, means an inadequate financial aid system, where the maximum Pell Grant award has remained largely the same for the last decade when adjusted for inflation, will be further eroded. But in light of the uncertainty in the economy and the lack of resolve at the federal level to support higher education, it is not clear the US government will fund the increased demand – it may cut the maximum award. And third, there are larger social trends, such as increased disparities in income and the erosion of public services, declines in the quality of many public schools, the stagnation and real declines for some socioeconomic groups in high school graduation rates; and the large increase in the number of part-time students, most of whom must work to stay financially solvent. This paper examines low-income, and upper-income, student access to the University of California and how low-income access compares with a group of elite privates (specifically Ivy League institutions) and selective publics. Using data from the University of California's Undergraduate Experience Survey (UCUES) and institutional data, we discuss what makes UC similar and different in the SES and demographic mix of students. Because the maximum Pell Grant is under $5,000, and the cost of tuition alone is higher in the publics, and much higher in our group of selective privates, the percentage and number of Pell Grant students at an institution provides evidence of its resolve, creativity, and financial commitment to admit and enroll working and middle-class students. We then analyze the undergraduate experience of our designation of poor students (defined for this analysis as Pell Grant recipients) and rich students (from high-income families, defined as those with household incomes above $125,000 and no need-based aid).2 While including other income groups, we use these contrasting categories of wealth to observe differences in the background of students, their choice of major, general levels of satisfaction, academic performance, and sense of belonging at the university. There is very little analytical work on the characteristics and percepti",
"title": ""
},
{
"docid": "28f8be68a0fe4762af272a0e11d53f7d",
"text": "In this article, we address the cross-domain (i.e., street and shop) clothing retrieval problem and investigate its real-world applications for online clothing shopping. It is a challenging problem due to the large discrepancy between street and shop domain images. We focus on learning an effective feature-embedding model to generate robust and discriminative feature representation across domains. Existing triplet embedding models achieve promising results by finding an embedding metric in which the distance between negative pairs is larger than the distance between positive pairs plus a margin. However, existing methods do not address the challenges in the cross-domain clothing retrieval scenario sufficiently. First, the intradomain and cross-domain data relationships need to be considered simultaneously. Second, the number of matched and nonmatched cross-domain pairs are unbalanced. To address these challenges, we propose a deep cross-triplet embedding algorithm together with a cross-triplet sampling strategy. The extensive experimental evaluations demonstrate the effectiveness of the proposed algorithms well. Furthermore, we investigate two novel online shopping applications, clothing trying on and accessories recommendation, based on a unified cross-domain clothing retrieval framework.",
"title": ""
},
{
"docid": "380628703d9200e859880fee2b89cd27",
"text": "Current wet chemical methods for biomass composition analysis using two-step sulfuric acid hydrolysis are time-consuming, labor-intensive, and unable to provide structural information about biomass. Infrared techniques provide fast, low-cost analysis, are non-destructive, and have shown promising results. Chemometric analysis has allowed researchers to perform qualitative and quantitative study of biomass with both near-infrared and mid-infrared spectroscopy. This review summarizes the progress and applications of infrared techniques in biomass study, and compares the infrared and the wet chemical methods for composition analysis. In addition to reviewing recent studies of biomass structure and composition, we also discuss the progress and prospects for the applications of infrared techniques. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ce167e13e5f129059f59c8e54b994fd4",
"text": "Critical research has emerged as a potentially important stream in information systems research, yet the nature and methods of critical research are still in need of clarification. While criteria or principles for evaluating positivist and interpretive research have been widely discussed, criteria or principles for evaluating critical social research are lacking. Therefore, the purpose of this paper is to propose a set of principles for the conduct of critical research. This paper has been accepted for publication in MIS Quarterly and follows on from an earlier piece that suggested a set of principles for interpretive research (Klein and Myers, 1999). The co-author of this paper is Heinz Klein.",
"title": ""
},
{
"docid": "bb380a89fe43b570a55b6b986dfdb5f8",
"text": "The overhead throwing athlete is an extremely challenging patient in sports medicine. The repetitive microtraumatic stresses imposed on the athlete's shoulder joint complex during the throwing motion constantly place the athlete at risk for injury. Treatment of the overhead athlete requires the understanding of several principles based on the unique physical characteristics of the overhead athlete and the demands endured during the act of throwing. These principles are described and incorporated in a multiphase progressive rehabilitation program designed to prevent injuries and rehabilitate the injured athlete, both nonoperatively and postoperatively.",
"title": ""
},
{
"docid": "77bbeb9510f4c9000291910bf06e4a22",
"text": "Traveling Salesman Problem is an important optimization issue of many fields such as transportation, logistics and semiconductor industries and it is about finding a Hamiltonian path with minimum cost. To solve this problem, many researchers have proposed different approaches including metaheuristic methods. Artificial Bee Colony algorithm is a well known swarm based optimization technique. In this paper we propose a new Artificial Bee Colony algorithm called Combinatorial ABC for Traveling Salesman Problem. Simulation results show that this Artificial Bee Colony algorithm can be used for combinatorial optimization problems.",
"title": ""
},
{
"docid": "373daff94b0867437e2211f460437a19",
"text": "We live in an increasingly connected and automated society. Smart environments embody this trend by linking computers to everyday tasks and settings. Important features of such environments are that they possess a degree of autonomy, adapt themselves to changing conditions, and communicate with humans in a natural way. These systems can be found in offices, airports, hospitals, classrooms, or any other environment. This article discusses automation of our most personal environment: the home. There are several characteristics that are commonly found in smart homes. This type of environment assumes controls and coordinates a network of sensors and devices, relieving the inhabitants of this burden. Interaction with smart homes is in a form that is comfortable to people: speech, gestures, and actions take the place of windows, icons, menus, and pointers. We define a smart home as one that is able to acquire and apply knowledge about its inhabitants and their surroundings in order to adapt to the inhabitants and meet the goals of comfort and efficiency. Designing and implementing smart homes requires a unique breadth of knowledge not limited to a single discipline, but integrates aspects of machine learning, decision making, human-machine interfaces, wireless networking, mobile communications, databases, sensor networks, and pervasive computing. With these capabilities, the home can control many aspects of the environment such as climate, lighting, maintenance, and entertainment. Intelligent automation of these activities can reduce the amount of interaction required by inhabitants and reduce energy consumption and other potential operating costs. The same capabilities can be",
"title": ""
},
{
"docid": "cb3f1598c2769b373a20b4dddd8b35ea",
"text": "An image hash should be (1) robust to allowable operations and (2) sensitive to illegal manipulations and distinct queries. Some applications also require the hash to be able to localize image tampering. This requires the hash to contain both robust content and alignment information to meet the above criterion. Fulfilling this is difficult because of two contradictory requirements. First, the hash should be small and second, to verify authenticity and then localize tampering, the amount of information in the hash about the original required would be large. Hence a tradeoff between these requirements needs to be found. This paper presents an image hashing method that addresses this concern, to not only detect but also localize tampering using a small signature (< 1kB). Illustrative experiments bring out the efficacy of the proposed method compared to existing methods.",
"title": ""
},
{
"docid": "7b8c56a03653509c729b37e1ce4d33fc",
"text": "Systems for declarative large-scale machine learning (ML) algorithms aim at high-level algorithm specification and automatic optimization of runtime execution plans. State-ofthe-art compilers rely on algebraic rewrites and operator selection, including fused operators to avoid materialized intermediates, reduce memory bandwidth requirements, and exploit sparsity across chains of operations. However, the unlimited number of relevant patterns for rewrites and operators poses challenges in terms of development effort and high performance impact. Query compilation has been studied extensively in the database literature, but ML programs additionally require handling linear algebra and exploiting algebraic properties, DAG structures, and sparsity. In this paper, we introduce Spoof, an architecture to automatically (1) identify algebraic simplification rewrites, and (2) generate fused operators in a holistic framework. We describe a snapshot of the overall system, including key techniques of sum-product optimization and code generation. Preliminary experiments show performance close to hand-coded fused operators, significant improvements over a baseline without fused operators, and moderate compilation overhead.",
"title": ""
},
{
"docid": "34ba1323c4975a566f53e2873231e6ad",
"text": "This paper describes the motivation, the realization, and the experience of incorporating simulation and hardware implementation into teaching computer organization and architecture to computer science students. It demonstrates that learning by doing has helped students to truly understand how a computer is constructed and how it really works in practice. Correlated with textbook material, a set of simulation and implementation projects were created on the basis of the work that students had done in previous homework and laboratory activities. Students can thus use these designs as building blocks for completing more complex projects at a later time. The projects cover a wide range of topics from simple adders up to ALU's and CPU's. These processors operate in a virtual manner on certain short assembly-language programs. Specifically, this paper shares the experience of using simulation tools (Alterareg Quartus II) and reconfigurable hardware prototyping platforms (Alterareg UP2 development boards)",
"title": ""
},
{
"docid": "ec5ade0dd3aee92102934de27beb6b4f",
"text": "This paper covers the whole process of developing an Augmented Reality Stereoscopig Render Engine for the Oculus Rift. To capture the real world in form of a camera stream, two cameras with fish-eye lenses had to be installed on the Oculus Rift DK1 hardware. The idea was inspired by Steptoe [1]. After the introduction, a theoretical part covers all the most neccessary elements to achieve an AR System for the Oculus Rift, following the implementation part where the code from the AR Stereo Engine is explained in more detail. A short conclusion section shows some results, reflects some experiences and in the final chapter some future works will be discussed. The project can be accessed via the git repository https: // github. com/ MaXvanHeLL/ ARift. git .",
"title": ""
},
{
"docid": "d91cb15eb4581c44c2f9f9a4ba67dfd1",
"text": "BACKGROUND\nbeta-Blockade-induced benefit in heart failure (HF) could be related to baseline heart rate and treatment-induced heart rate reduction, but no such relationships have been demonstrated.\n\n\nMETHODS AND RESULTS\nIn CIBIS II, we studied the relationships between baseline heart rate (BHR), heart rate changes at 2 months (HRC), nature of cardiac rhythm (sinus rhythm or atrial fibrillation), and outcomes (mortality and hospitalization for HF). Multivariate analysis of CIBIS II showed that in addition to beta-blocker treatment, BHR and HRC were both significantly related to survival and hospitalization for worsening HF, the lowest BHR and the greatest HRC being associated with best survival and reduction of hospital admissions. No interaction between the 3 variables was observed, meaning that on one hand, HRC-related improvement in survival was similar at all levels of BHR, and on the other hand, bisoprolol-induced benefit over placebo for survival was observed to a similar extent at any level of both BHR and HRC. Bisoprolol reduced mortality in patients with sinus rhythm (relative risk 0.58, P:<0.001) but not in patients with atrial fibrillation (relative risk 1.16, P:=NS). A similar result was observed for cardiovascular mortality and hospitalization for HF worsening.\n\n\nCONCLUSIONS\nBHR and HRC are significantly related to prognosis in heart failure. beta-Blockade with bisoprolol further improves survival at any level of BHR and HRC and to a similar extent. The benefit of bisoprolol is questionable, however, in patients with atrial fibrillation.",
"title": ""
}
] | scidocsrr |
1152ca1e52211fee8c089a8119edc5e5 | Charge equalization converter with parallel primary winding for series connected Lithium-Ion battery strings in HEV | [
{
"docid": "90c3543eca7a689188725e610e106ce9",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. 
For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
}
] | [
{
"docid": "690f65505dd936f834a3bb8042147564",
"text": "Forging new memories for facts and events, holding critical details in mind on a moment-to-moment basis, and retrieving knowledge in the service of current goals all depend on a complex interplay between neural ensembles throughout the brain. Over the past decade, researchers have increasingly utilized powerful analytical tools (e.g., multivoxel pattern analysis) to decode the information represented within distributed functional magnetic resonance imaging activity patterns. In this review, we discuss how these methods can sensitively index neural representations of perceptual and semantic content and how leverage on the engagement of distributed representations provides unique insights into distinct aspects of memory-guided behavior. We emphasize that, in addition to characterizing the contents of memories, analyses of distributed patterns shed light on the processes that influence how information is encoded, maintained, or retrieved, and thus inform memory theory. We conclude by highlighting open questions about memory that can be addressed through distributed pattern analyses.",
"title": ""
},
{
"docid": "ca0f2b3565b6479c5c3b883325bf3296",
"text": "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains—Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-ofthe-art domain-specific systems both in terms of BLEU scores and human evaluation.",
"title": ""
},
{
"docid": "df0be45b6db0de70acb6bbf44e7898aa",
"text": "The paper focuses on conservation agriculture (CA), defined as minimal soil disturbance (no-till, NT) and permanent soil cover (mulch) combined with rotations, as a more sustainable cultivation system for the future. Cultivation and tillage play an important role in agriculture. The benefits of tillage in agriculture are explored before introducing conservation tillage (CT), a practice that was borne out of the American dust bowl of the 1930s. The paper then describes the benefits of CA, a suggested improvement on CT, where NT, mulch and rotations significantly improve soil properties and other biotic factors. The paper concludes that CA is a more sustainable and environmentally friendly management system for cultivating crops. Case studies from the rice-wheat areas of the Indo-Gangetic Plains of South Asia and the irrigated maize-wheat systems of Northwest Mexico are used to describe how CA practices have been used in these two environments to raise production sustainably and profitably. Benefits in terms of greenhouse gas emissions and their effect on global warming are also discussed. The paper concludes that agriculture in the next decade will have to sustainably produce more food from less land through more efficient use of natural resources and with minimal impact on the environment in order to meet growing population demands. Promoting and adopting CA management systems can help meet this goal.",
"title": ""
},
{
"docid": "ccaa01441d7de9009dea10951a3ea2f3",
"text": "for Natural Language A First Course in Computational Semanti s Volume II Working with Dis ourse Representation Stru tures Patri k Bla kburn & Johan Bos September 3, 1999",
"title": ""
},
{
"docid": "5ca14c0581484f5618dd806a6f994a03",
"text": "Many of existing criteria for evaluating Web sites quality require methods such as heuristic evaluations, or/and empirical usability tests. This paper aims at defining a quality model and a set of characteristics relating internal and external quality factors and giving clues about potential problems, which can be measured by automated tools. The first step in the quality assessment process is an automatic check of the source code, followed by manual evaluation, possibly supported by an appropriate user panel. As many existing tools can check sites (mainly considering accessibility issues), the general architecture will be based upon a conceptual model of the site/page, and the tools will export their output to a Quality Data Base, which is the basis for subsequent actions (checking, reporting test results, etc.).",
"title": ""
},
{
"docid": "738f60fbfe177eec52057c8e5ab43e55",
"text": "From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains.",
"title": ""
},
{
"docid": "25a1ff583d944075593615777ec4c3be",
"text": "Diagnostic blood samples collected by phlebotomy are the most common type of biological specimens drawn and sent to laboratory medicine facilities for being analyzed, thus supporting caring physicians in patient diagnosis, follow-up and/or therapeutic monitoring. Phlebotomy, a relatively invasive medical procedure, is indeed critical for the downstream procedures accomplished either in the analytical phase made in the laboratory or in the interpretive process done by the physicians. Diagnosis, management, treatment of patients and ultimately patient safety itself can be compromised by poor phlebotomy quality. We have read with interest a recent article where the authors addressed important aspects of venous blood collection for laboratory medicine analysis. The authors conducted a phlebotomy survey based on the Clinical and Laboratory Standard Institute (CLSI) H03-A6 document (presently replaced by the GP41-A6 document) in three government hospitals in Ethiopia to evaluate 120 professionals (101 non-laboratory professionals vs. 19 laboratory professionals) as regards the venous blood collection practice. The aim of this mini (non-systematic) review is to both take a cue from the above article and from current practices we had already observed in other laboratory settings, and discuss four questionable activities performed by health care professionals during venous blood collection. We refer to: i) diet restriction assessment; ii) puncture site cleansing; iii) timing of tourniquet removal and; iv) mixing specimen with additives.",
"title": ""
},
{
"docid": "98388ecea031b70916cabda20edf3496",
"text": "Rim-driven thrusters have received much attention concerning the potential benefits in vibration and hydrodynamic characteristics, which are of great importance in marine transportation systems. In this sense, the rim-driven permanent magnet, brushless dc, and induction motors have been recently suggested to be employed as marine propulsion motors. On the other hand, high-temperature superconducting (HTS) synchronous motors are becoming much fascinating, particularly in transport applications, regarding some considerable advantages such as low loss, high efficiency, and compactness. However, the HTS-type rim-driven synchronous motor has not been studied yet. Therefore, this paper is devoted to a design practice of rim-driven synchronous motors with HTS field winding. A detailed design procedure is developed for the HTS rim-driven motors, and the design algorithm is validated applying the finite element (FE) method. The FE model of a three-phase 2.5-MW HTS rim-driven synchronous motor is utilized, and the electromagnetic characteristics of the motor are then evaluated. The goal is to design an HTS machine fitted in a thin duct to minimize the hydrodynamic drag force. The design problem exhibits some difficulties while considering various constraints.",
"title": ""
},
{
"docid": "41f5e010cf81fd0152a806853f4d7e93",
"text": "Consider revising medium temperature used LM35 temperature sensor, what is an economic and feasible method. This study mainly researches the applicability of LM35 temperature sensor in soil temperature testing field. Selected the sensor, and based on the theoretical equation between the sensor output voltage and Celsius temperature; introduced correction coefficient, carried through the calibration experiment of the sensor; further more, it is applied to the potted rice's soil temperature detection. The calibration results show that, each sensor correction coefficient is different from others, but these numerical are close to 1, the linear relationship was very significant between tested medium temperature and sensor output voltage. In the key trial period of rice potted, used LM35DZ type temperature sensor to measure the soil temperature. The analysis result show that, the changing trends are basically equal both soil temperature and air temperature, and the characteristics of soil temperatures are lag. The variance analysis shows that, the difference was not significant paper film covered and without covered on soil temperature.",
"title": ""
},
{
"docid": "ae70b9ef5eeb6316b5b022662191cc4f",
"text": "The total harmonic distortion (THD) is an important performance criterion for almost any communication device. In most cases, the THD of a periodic signal, which has been processed in some way, is either measured directly or roughly estimated numerically, while analytic methods are employed only in a limited number of simple cases. However, the knowledge of the theoretical THD may be quite important for the conception and design of the communication equipment (e.g. transmitters, power amplifiers). The aim of this paper is to present a general theoretic approach, which permits to obtain an analytic closed-form expression for the THD. It is also shown that in some cases, an approximate analytic method, having good precision and being less sophisticated, may be developed. Finally, the mathematical technique, on which the proposed method is based, is described in the appendix.",
"title": ""
},
{
"docid": "1419e2f53412b4ce2d6944bad163f13d",
"text": "Determining the emotion of a song that best characterizes the affective content of the song is a challenging issue due to the difficulty of collecting reliable ground truth data and the semantic gap between human's perception and the music signal of the song. To address this issue, we represent an emotion as a point in the Cartesian space with valence and arousal as the dimensions and determine the coordinates of a song by the relative emotion of the song with respect to other songs. We also develop an RBF-ListNet algorithm to optimize the ranking-based objective function of our approach. The cognitive load of annotation, the accuracy of emotion recognition, and the subjective quality of the proposed approach are extensively evaluated. Experimental results show that this ranking-based approach simplifies emotion annotation and enhances the reliability of the ground truth. The performance of our algorithm for valence recognition reaches 0.326 in Gamma statistic.",
"title": ""
},
{
"docid": "07b362c7f6e941513cfbafce1ba87db1",
"text": "ResearchGate is increasingly used by scholars to upload the full-text of their articles and make them freely available for everyone. This study aims to investigate the extent to which ResearchGate members as authors of journal articles comply with publishers’ copyright policies when they self-archive full-text of their articles on ResearchGate. A random sample of 500 English journal articles available as full-text on ResearchGate were investigated. 108 articles (21.6%) were open access (OA) published in OA journals or hybrid journals. Of the remaining 392 articles, 61 (15.6%) were preprint, 24 (6.1%) were post-print and 307 (78.3%) were published (publisher) PDF. The key finding was that 201 (51.3%) out of 392 non-OA articles infringed the copyright and were non-compliant with publishers’ policy. While 88.3% of journals allowed some form of self-archiving (SHERPA/RoMEO green, blue or yellow journals), the majority of non-compliant cases (97.5%) occurred when authors self-archived publishers’ PDF files (final published version). This indicates that authors infringe copyright most of the time not because they are not allowed to self-archive, but because they use the wrong version, which might imply their lack of understanding of copyright policies and/or complexity and diversity of policies.",
"title": ""
},
{
"docid": "8be48d08aec21ecdf8a124fa3fef8d48",
"text": "Topic modeling has become a widely used tool for document management. However, there are few topic models distinguishing the importance of documents on different topics. In this paper, we propose a framework LIMTopic to incorporate link based importance into topic modeling. To instantiate the framework, RankTopic and HITSTopic are proposed by incorporating topical pagerank and topical HITS into topic modeling respectively. Specifically, ranking methods are first used to compute the topical importance of documents. Then, a generalized relation is built between link importance and topic modeling. We empirically show that LIMTopic converges after a small number of iterations in most experimental settings. The necessity of incorporating link importance into topic modeling is justified based on KL-Divergences between topic distributions converted from topical link importance and those computed by basic topic models. To investigate the document network summarization performance of topic models, we propose a novel measure called log-likelihood of ranking-integrated document-word matrix. Extensive experimental results show that LIMTopic performs better than baseline models in generalization performance, document clustering and classification, topic interpretability and document network summarization performance. Moreover, RankTopic has comparable performance with relational topic model (RTM) and HITSTopic performs much better than baseline models in document clustering and classification.",
"title": ""
},
{
"docid": "dcada3c12fb14b454964b97b8541b69d",
"text": "nce ch; n ple iray r. In hue 003 Abstract. We present a comparison between two color equalization algorithms: Retinex, the famous model due to Land and McCann, and Automatic Color Equalization (ACE), a new algorithm recently presented by the authors. These two algorithms share a common approach to color equalization, but different computational models. We introduce the two models focusing on differences and common points. An analysis of their computational characteristics illustrates the way the Retinex approach has influenced ACE structure, and which aspects of the first algorithm have been modified in the second one and how. Their interesting equalization properties, like lightness and color constancy, image dynamic stretching, global and local filtering, and data driven dequantization, are qualitatively and quantitatively presented and compared, together with their ability to mimic the human visual system. © 2004 SPIE and IS&T. [DOI: 10.1117/1.1635366]",
"title": ""
},
{
"docid": "2f8a6dcaeea91ef5034908b5bab6d8d3",
"text": "Web-based social systems enable new community-based opportunities for participants to engage, share, and interact. This community value and related services like search and advertising are threatened by spammers, content polluters, and malware disseminators. In an effort to preserve community value and ensure longterm success, we propose and evaluate a honeypot-based approach for uncovering social spammers in online social systems. Two of the key components of the proposed approach are: (1) The deployment of social honeypots for harvesting deceptive spam profiles from social networking communities; and (2) Statistical analysis of the properties of these spam profiles for creating spam classifiers to actively filter out existing and new spammers. We describe the conceptual framework and design considerations of the proposed approach, and we present concrete observations from the deployment of social honeypots in MySpace and Twitter. We find that the deployed social honeypots identify social spammers with low false positive rates and that the harvested spam data contains signals that are strongly correlated with observable profile features (e.g., content, friend information, posting patterns, etc.). Based on these profile features, we develop machine learning based classifiers for identifying previously unknown spammers with high precision and a low rate of false positives.",
"title": ""
},
{
"docid": "60de343325a305b08dfa46336f2617b5",
"text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.",
"title": ""
},
{
"docid": "c6f3d4b2a379f452054f4220f4488309",
"text": "3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (in-the-wild). In this paper, we propose the first, to the best of our knowledge, in-the-wild 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an in-the-wild texture model. We show that the employment of such an in-the-wild texture model greatly simplifies the fitting procedure, because there is no need to optimise with regards to the illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard in-the-wild facial databases.",
"title": ""
},
{
"docid": "0e514c165e362de91764f3ddd2a09e15",
"text": "The authors examined how networks of teams integrate their efforts to succeed collectively. They proposed that integration processes used to align efforts among multiple teams are important predictors of multiteam performance. The authors used a multiteam system (MTS) simulation to assess how both cross-team and within-team processes relate to MTS performance over multiple performance episodes that differed in terms of required interdependence levels. They found that cross-team processes predicted MTS performance beyond that accounted for by within-team processes. Further, cross-team processes were more important for MTS effectiveness when there were high cross-team interdependence demands as compared with situations in which teams could work more independently. Results are discussed in terms of extending theory and applications from teams to multiteam systems.",
"title": ""
},
{
"docid": "033553066cafa5c777bfab564a957c17",
"text": "BACKGROUND\nBased on evidence that psychologic distress often goes unrecognized although it is common among cancer patients, clinical practice guidelines recommend routine screening for distress. For this study, the authors sought to determine whether the single-item Distress Thermometer (DT) compared favorably with longer measures currently used to screen for distress.\n\n\nMETHODS\nPatients (n = 380) who were recruited from 5 sites completed the DT and identified the presence or absence of 34 problems using a standardized list. Participants also completed the 14-item Hospital Anxiety and Depression Scale (HADS) and an 18-item version of the Brief Symptom Inventory (BSI-18), both of which have established cutoff scores for identifying clinically significant distress.\n\n\nRESULTS\nReceiver operating characteristic (ROC) curve analyses of DT scores yielded area under the curve estimates relative to the HADS cutoff score (0.80) and the BSI-18 cutoff scores (0.78) indicative of good overall accuracy. ROC analyses also showed that a DT cutoff score of 4 had optimal sensitivity and specificity relative to both the HADS and BSI-18 cutoff scores. Additional analyses indicated that, compared with patients who had DT scores < 4, patients who had DT scores > or = 4 were more likely to be women, have a poorer performance status, and report practical, family, emotional, and physical problems (P < or = 0.05).\n\n\nCONCLUSIONS\nFindings confirm that the single-item DT compares favorably with longer measures used to screen for distress. A DT cutoff score of 4 yielded optimal sensitivity and specificity in a general cancer population relative to established cutoff scores on longer measures. The use of this cutoff score identified patients with a range of problems that were likely to reflect psychologic distress.",
"title": ""
},
{
"docid": "8f0805ba67919e349f2cd506378a5171",
"text": "Cycloastragenol (CAG) is an aglycone of astragaloside IV. It was first identified when screening Astragalus membranaceus extracts for active ingredients with antiaging properties. The present study demonstrates that CAG stimulates telomerase activity and cell proliferation in human neonatal keratinocytes. In particular, CAG promotes scratch wound closure of human neonatal keratinocyte monolayers in vitro. The distinct telomerase-activating property of CAG prompted evaluation of its potential application in the treatment of neurological disorders. Accordingly, CAG induced telomerase activity and cAMP response element binding (CREB) activation in PC12 cells and primary neurons. Blockade of CREB expression in neuronal cells by RNA interference reduced basal telomerase activity, and CAG was no longer efficacious in increasing telomerase activity. CAG treatment not only induced the expression of bcl2, a CREB-regulated gene, but also the expression of telomerase reverse transcriptase in primary cortical neurons. Interestingly, oral administration of CAG for 7 days attenuated depression-like behavior in experimental mice. In conclusion, CAG stimulates telomerase activity in human neonatal keratinocytes and rat neuronal cells, and induces CREB activation followed by tert and bcl2 expression. Furthermore, CAG may have a novel therapeutic role in depression.",
"title": ""
}
] | scidocsrr |
75e6108558b653a0b4dfb0a5bd0a4272 | Exhausted Parents: Development and Preliminary Validation of the Parental Burnout Inventory | [
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
},
{
"docid": "c90ab409ea2a9726f6ddded45e0fdea9",
"text": "About a decade ago, the Adult Attachment Interview (AAI; C. George, N. Kaplan, & M. Main, 1985) was developed to explore parents' mental representations of attachment as manifested in language during discourse of childhood experiences. The AAI was intended to predict the quality of the infant-parent attachment relationship, as observed in the Ainsworth Strange Situation, and to predict parents' responsiveness to their infants' attachment signals. The current meta-analysis examined the available evidence with respect to these predictive validity issues. In regard to the 1st issue, the 18 available samples (N = 854) showed a combined effect size of 1.06 in the expected direction for the secure vs. insecure split. For a portion of the studies, the percentage of correspondence between parents' mental representation of attachment and infants' attachment security could be computed (the resulting percentage was 75%; kappa = .49, n = 661). Concerning the 2nd issue, the 10 samples (N = 389) that were retrieved showed a combined effect size of .72 in the expected direction. According to conventional criteria, the effect sizes are large. It was concluded that although the predictive validity of the AAI is a replicated fact, there is only partial knowledge of how attachment representations are transmitted (the transmission gap).",
"title": ""
},
{
"docid": "feafd64c9f81b07f7f616d2e36e15e0c",
"text": "Burnout is a prolonged response to chronic emotional and interpersonal stressors on the job, and is defined by the three dimensions of exhaustion, cynicism, and inefficacy. The past 25 years of research has established the complexity of the construct, and places the individual stress experience within a larger organizational context of people's relation to their work. Recently, the work on burnout has expanded internationally and has led to new conceptual models. The focus on engagement, the positive antithesis of burnout, promises to yield new perspectives on interventions to alleviate burnout. The social focus of burnout, the solid research basis concerning the syndrome, and its specific ties to the work domain make a distinct and valuable contribution to people's health and well-being.",
"title": ""
}
] | [
{
"docid": "c5122000c9d8736cecb4d24e6f56aab8",
"text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.",
"title": ""
},
{
"docid": "ba7cb71cf07765f915d548f2a01e7b98",
"text": "Existing data storage systems offer a wide range of functionalities to accommodate an equally diverse range of applications. However, new classes of applications have emerged, e.g., blockchain and collaborative analytics, featuring data versioning, fork semantics, tamper-evidence or any combination thereof. They present new opportunities for storage systems to efficiently support such applications by embedding the above requirements into the storage. In this paper, we present ForkBase, a storage engine designed for blockchain and forkable applications. By integrating core application properties into the storage, ForkBase not only delivers high performance but also reduces development effort. The storage manages multiversion data and supports two variants of fork semantics which enable different fork worklflows. ForkBase is fast and space efficient, due to a novel index class that supports efficient queries as well as effective detection of duplicate content across data objects, branches and versions. We demonstrate ForkBase’s performance using three applications: a blockchain platform, a wiki engine and a collaborative analytics application. We conduct extensive experimental evaluation against respective state-of-the-art solutions. The results show that ForkBase achieves superior performance while significantly lowering the development effort. PVLDB Reference Format: Sheng Wang, Tien Tuan Anh Dinh, Qian Lin, Zhongle Xie, Meihui Zhang, Qingchao Cai, Gang Chen, Beng Chin Ooi, Pingcheng Ruan. ForkBase: An Efficient Storage Engine for Blockchain and Forkable Applications. PVLDB, 11(10): 1137-1150, 2018. DOI: https://doi.org/10.14778/3231751.3231762",
"title": ""
},
{
"docid": "4bce4bc5fde90ed5448ee6361a9534ff",
"text": "Much of human dialogue occurs in semicooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success is easy to measure, making this an interesting task for AI. We gather a large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other’s reward functions must reach an agreement (or a deal) via natural language dialogue. For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states. We also introduce dialogue rollouts, in which the model plans ahead by simulating possible complete continuations of the conversation, and find that this technique dramatically improves performance. Our code and dataset are publicly available.1",
"title": ""
},
{
"docid": "debb7f6f8e00b536dd823c4b513f5950",
"text": "It is known that in the Tower of Ha noi graphs there are at most two different shortest paths between any fixed pair of vertices. A formula is given that counts, for a given vertex v, thenumber of verticesu such that there are two shortest u, v-paths. The formul a is expressed in terms of Stern’s diatomic sequenceb(n) (n ≥ 0) and implies that only for vertices of degree two this number is zero. Plane embeddings of the Tower of Hanoi graphs are also presented that provide an explicit description ofb(n) as the number of elements of the sets of vertices of the Tower of Hanoi graphs intersected by certain lines in the plane. © 2004 Elsevier Ltd. All rights reserved. MSC (2000):05A15; 05C12; 11B83; 51M15",
"title": ""
},
{
"docid": "f01545609634b5aab7e3c1406f93046c",
"text": "We present a neural network architecture based on bidirectional LSTMs to compute representations of words in the sentential contexts. These context-sensitive word representations are suitable for, e.g., distinguishing different word senses and other context-modulated variations in meaning. To learn the parameters of our model, we use cross-lingual supervision, hypothesizing that a good representation of a word in context will be one that is sufficient for selecting the correct translation into a second language. We evaluate the quality of our representations as features in three downstream tasks: prediction of semantic supersenses (which assign nouns and verbs into a few dozen semantic classes), low resource machine translation, and a lexical substitution task, and obtain state-of-the-art results on all of these.",
"title": ""
},
{
"docid": "1285bd50bb6462b9864d61a59e77435e",
"text": "Precision Agriculture is advancing but not as fast as predicted 5 years ago. The development of proper decision-support systems for implementing precision decisions remains a major stumbling block to adoption. Other critical research issues are discussed, namely, insufficient recognition of temporal variation, lack of whole-farm focus, crop quality assessment methods, product tracking and environmental auditing. A generic research programme for precision agriculture is presented. A typology of agriculture countries is introduced and the potential of each type for precision agriculture discussed.",
"title": ""
},
{
"docid": "e7b1d82b6716434da8bbeeeec895dac4",
"text": "Grapevine is the one of the most important fruit species in the world. Comparative genome sequencing of grape cultivars is very important for the interpretation of the grape genome and understanding its evolution. The genomes of four Georgian grape cultivars—Chkhaveri, Saperavi, Meskhetian green, and Rkatsiteli, belonging to different haplogroups, were resequenced. The shotgun genomic libraries of grape cultivars were sequenced on an Illumina HiSeq. Pinot Noir nuclear, mitochondrial, and chloroplast DNA were used as reference. Mitochondrial DNA of Chkhaveri closely matches that of the reference Pinot noir mitochondrial DNA, with the exception of 16 SNPs found in the Chkhaveri mitochondrial DNA. The number of SNPs in mitochondrial DNA from Saperavi, Meskhetian green, and Rkatsiteli was 764, 702, and 822, respectively. Nuclear DNA differs from the reference by 1,800,675 nt in Chkhaveri, 1,063,063 nt in Meskhetian green, 2,174,995 in Saperavi, and 5,011,513 in Rkatsiteli. Unlike mtDNA Pinot noir, chromosomal DNA is closer to the Meskhetian green than to other cultivars. Substantial differences in the number of SNPs in mitochondrial and nuclear DNA of Chkhaveri and Pinot noir cultivars are explained by backcrossing or introgression of their wild predecessors before or during the process of domestication. Annotation of chromosomal DNA of Georgian grape cultivars by MEGANTE, a web-based annotation system, shows 66,745 predicted genes (Chkhaveri—17,409; Saperavi—17,021; Meskhetian green—18,355; and Rkatsiteli—13,960). Among them, 106 predicted genes and 43 pseudogenes of terpene synthase genes were found in chromosomes 12, 18 random (18R), and 19. Four novel TPS genes not present in reference Pinot noir DNA were detected. Two of them—germacrene A synthase (Chromosome 18R) and (−) germacrene D synthase (Chromosome 19) can be identified as putatively full-length proteins. This work performs the first attempt of the comparative whole genome analysis of different haplogroups of Vitis vinifera cultivars. Based on complete nuclear and mitochondrial DNA sequence analysis, hypothetical phylogeny scheme of formation of grape cultivars is presented.",
"title": ""
},
{
"docid": "2b6b8098ea397f85554113a42876f368",
"text": "Teacher efficacy has proved to be powerfully related to many meaningful educational outcomes such as teachers’ persistence, enthusiasm, commitment and instructional behavior, as well as student outcomes such as achievement, motivation, and self-efficacy beliefs. However, persistent measurement problems have plagued those who have sought to study teacher efficacy. We review many of the major measures that have been used to capture the construct, noting problems that have arisen with each. We then propose a promising new measure of teacher efficacy along with validity and reliability data from three separate studies. Finally, new directions for research made possible by this instrument are explored. r 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e592ccd706b039b12cc4e724a7b217cd",
"text": "In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a light-weight protocol to quickly and securely compute the sum of the inputs of a subset of participants assuming a semi-honest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed mini-batch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. In other words, speed and robustness takes precedence over precision. We analyze the protocol theoretically as well as experimentally based on churn statistics from a real smartphone trace. We derive a sufficient condition for preventing the leakage of an individual value, and we demonstrate the feasibility of the overhead of the protocol.",
"title": ""
},
{
"docid": "6a4437fa8a5a764d99ed5471401f5ce4",
"text": "There is disagreement in the literature about the exact nature of the phenomenon of empathy. There are emotional, cognitive, and conditioning views, applying in varying degrees across species. An adequate description of the ultimate and proximate mechanism can integrate these views. Proximately, the perception of an object's state activates the subject's corresponding representations, which in turn activate somatic and autonomic responses. This mechanism supports basic behaviors (e.g., alarm, social facilitation, vicariousness of emotions, mother-infant responsiveness, and the modeling of competitors and predators) that are crucial for the reproductive success of animals living in groups. The Perception-Action Model (PAM), together with an understanding of how representations change with experience, can explain the major empirical effects in the literature (similarity, familiarity, past experience, explicit teaching, and salience). It can also predict a variety of empathy disorders. The interaction between the PAM and prefrontal functioning can also explain different levels of empathy across species and age groups. This view can advance our evolutionary understanding of empathy beyond inclusive fitness and reciprocal altruism and can explain different levels of empathy across individuals, species, stages of development, and situations.",
"title": ""
},
{
"docid": "73128099f3ddd19e4f88d10cdafbd506",
"text": "BACKGROUND\nRecently, there has been an increased interest in the effects of essential oils on athletic performances and other physiological effects. This study aimed to assess the effects of Citrus sinensis flower and Mentha spicata leaves essential oils inhalation in two different groups of athlete male students on their exercise performance and lung function.\n\n\nMETHODS\nTwenty physical education students volunteered to participate in the study. The subjects were randomly assigned into two groups: Mentha spicata and Citrus sinensis (ten participants each). One group was nebulized by Citrus sinensis flower oil and the other by Mentha spicata leaves oil in a concentration of (0.02 ml/kg of body mass) which was mixed with 2 ml of normal saline for 5 min before a 1500 m running tests. Lung function tests were measured using a spirometer for each student pre and post nebulization giving the same running distance pre and post oils inhalation.\n\n\nRESULTS\nA lung function tests showed an improvement on the lung status for the students after inhaling of the oils. Interestingly, there was a significant increase in Forced Expiratory Volume in the first second and Forced Vital Capacity after inhalation for the both oils. Moreover significant reductions in the means of the running time were observed among these two groups. The normal spirometry results were 50 %, while after inhalation with M. spicata oil the ratio were 60 %.\n\n\nCONCLUSION\nOur findings support the effectiveness of M. spicata and C. sinensis essential oils on the exercise performance and respiratory function parameters. However, our conclusion and generalisability of our results should be interpreted with caution due to small sample size and lack of control groups, randomization or masking. We recommend further investigations to explain the mechanism of actions for these two essential oils on exercise performance and respiratory parameters.\n\n\nTRIAL REGISTRATION\nISRCTN10133422, Registered: May 3, 2016.",
"title": ""
},
{
"docid": "4a26afba58270d7ce1a0eb50bd659eae",
"text": "Recommendation can be reduced to a sub-problem of link prediction, with specific nodes (users and items) and links (similar relations among users/items, and interactions between users and items). However, the previous link prediction algorithms need to be modified to suit the recommendation cases since they do not consider the separation of these two fundamental relations: similar or dissimilar and like or dislike. In this paper, we propose a novel and unified way to solve this problem, which models the relation duality using complex number. Under this representation, the previous works can directly reuse. In experiments with the Movie Lens dataset and the Android software website AppChina.com, the presented approach achieves significant performance improvement comparing with other popular recommendation algorithms both in accuracy and coverage. Besides, our results revealed some new findings. First, it is observed that the performance is improved when the user and item popularities are taken into account. Second, the item popularity plays a more important role than the user popularity does in final recommendation. Since its notable performance, we are working to apply it in a commercial setting, AppChina.com website, for application recommendation.",
"title": ""
},
{
"docid": "749cfda68d5d7f09c0861dc723563db9",
"text": "BACKGROUND\nOnline social networking use has been integrated into adolescents' daily life and the intensity of online social networking use may have important consequences on adolescents' well-being. However, there are few validated instruments to measure social networking use intensity. The present study aims to develop the Social Networking Activity Intensity Scale (SNAIS) and validate it among junior middle school students in China.\n\n\nMETHODS\nA total of 910 students who were social networking users were recruited from two junior middle schools in Guangzhou, and 114 students were retested after two weeks to examine the test-retest reliability. The psychometrics of the SNAIS were estimated using appropriate statistical methods.\n\n\nRESULTS\nTwo factors, Social Function Use Intensity (SFUI) and Entertainment Function Use Intensity (EFUI), were clearly identified by both exploratory and confirmatory factor analyses. No ceiling or floor effects were observed for the SNAIS and its two subscales. The SNAIS and its two subscales exhibited acceptable reliability (Cronbach's alpha = 0.89, 0.90 and 0.60, and test-retest Intra-class Correlation Coefficient = 0.85, 0.87 and 0.67 for Overall scale, SFUI and EFUI subscale, respectively, p<0.001). As expected, the SNAIS and its subscale scores were correlated significantly with emotional connection to social networking, social networking addiction, Internet addiction, and characteristics related to social networking use.\n\n\nCONCLUSIONS\nThe SNAIS is an easily self-administered scale with good psychometric properties. It would facilitate more research in this field worldwide and specifically in the Chinese population.",
"title": ""
},
{
"docid": "772193675598233ba1ab60936b3091d4",
"text": "The proposed quasiresonant control scheme can be widely used in a dc-dc flyback converter because it can achieve high efficiency with minimized external components. The proposed dynamic frequency selector improves conversion efficiency especially at light loads to meet the requirement of green power since the converter automatically switches to the discontinuous conduction mode for reducing the switching frequency and the switching power loss. Furthermore, low quiescent current can be guaranteed by the constant current startup circuit to further reduce power loss after the startup procedure. The test chip fabricated in VIS 0.5 μm 500 V UHV process occupies an active silicon area of 3.6 mm 2. The peak efficiency can achieve 92% at load of 80 W and 85% efficiency at light load of 5 W.",
"title": ""
},
{
"docid": "903d67c31159d95921f160700d876cf2",
"text": "Second Life (SL) is currently the most mature and popular multi-user virtual world platform being used in education. Through an in-depth examination of SL, this article explores its potential and the barriers that multi-user virtual environments present to educators wanting to use immersive 3-D spaces in their teaching. The context is set by tracing the history of virtual worlds back to early multi-user online computer gaming environments and describing the current trends in the development of 3-D immersive spaces. A typology for virtual worlds is developed and the key features that have made unstructured 3-D spaces so attractive to educators are described. The popularity in use of SL is examined through three critical components of the virtual environment experience: technical, immersive and social. From here, the paper discusses the affordances that SL offers for educational activities and the types of teaching approaches that are being explored by institutions. The work concludes with a critical analysis of the barriers to successful implementation of SL as an educational tool and maps a number of developments that are underway to address these issues across virtual worlds more broadly. Introduction The story of virtual worlds is one that cannot be separated from technological change. As we witness increasing maturity and convergence in broadband, wireless computing, British Journal of Educational Technology Vol 40 No 3 2009 414–426 doi:10.1111/j.1467-8535.2009.00952.x © 2009 The Author. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. video and audio technologies, we see virtual immersive environments becoming more practical and useable. In this article, I review the present socio-technical environment of virtual worlds, and draw on an analysis of Second Life (SL) to outline the potential for and the barriers to successful implementation of 3-D immersive spaces in education. Virtual worlds have existed in some form since the early 1980s, but their absolute definition remains contested. This reflects the general nature of a term that draws on multiple writings of the virtual and the difficulties in attempting to fix descriptions in an area that is undergoing persistent technological development. The numerous contextual descriptions that have appeared, from the perspectives of writers, academics, industry professionals and the media, have further complicated agreement on a common understanding of virtual worlds. Bell (2008) has approached this problem by suggesting a combined definition based on the work of Bartle (2004), Castronova (2004) and Koster (2004), drawing the work together using key terms that relate to: synchronicity, persistence, network of people, avatar representation and facilitation of the experience by networked computers. But perhaps the most satisfying and simplest insight comes from Schroeder (1996, 2008) who has consistently argued that virtual environments and virtual reality technologies should be defined as: A computer-generated display that allows or compels the user (or users) to have a sense of being present in an environment other than the one they are actually in, and to interact with that environment (Schroeder, 1996, p. 25) In other words, a virtual world provides an experience set within a technological environment that gives the user a strong sense of being there. 
The multi-user virtual environments (MUVEs) of today share common features that reflect their roots in the gaming worlds of multi-user dungeons and massively multiplayer online games (MMOs), made more popular in recent times through titles such as NeverWinter Nights and World of Warcraft, both based on the Dungeons and Dragons genre of role-playing game. Virtual worlds may appear in different forms yet they possess a number of recurrent features that include: • persistence of the in-world environment • a shared space allowing multiple users to participate simultaneously • virtual embodiment in the form of an avatar (a personisable 3-D representation of the self) • interactions that occur between users and objects in a 3-D environment • an immediacy of action such that interactions occur in real time • similarities to the real world such as topography, movement and physics that provide the illusion of being there. (Smart, Cascio & Paffendof, 2007) These are features compelling enough to attract more than 300 million registered users to spend part of their time within commercial social and gaming virtual worlds (Hays, 2008).
Within the typology outlined in Table 1, concrete educational activity can be identified in all four of the virtual world categories listed. The boundaries between these categories are soft and reflect the flexibility of some virtual worlds to provide more than one form of use. This is particularly true of SL, and has contributed to this platform’s high profile in comparison to other contemporary MUVEs. Although often defined as a 3-D social networking space, SL also supports role-playing game communities and some degree of cooperative workflow through the in-world tools and devices that have been built by residents. SL as the platform of choice for education SL represents the most mature of the social virtual world platforms, and the high usage figures compared with other competing platforms reflects this dominance within the educational world. The regular Eduserv virtual worlds survey conducted among UK tertiary educators has identified SL as the most popular educational MUVE: [Table 1: A typology of 3-D virtual worlds (adapted from McKeown, 2007). Categories: flexible narrative (games (MMPORGs) and serious games, e.g. World of Warcraft, NeverWinter Nights); social world (social platforms, 3-D chat rooms and virtual world generators); simulation (simulations or reflections of the 'real'); workspace (3-D realisation of CSCWs).]",
"title": ""
},
{
"docid": "9c637dff0539c6a80ecceb8e9fa9d567",
"text": "Learning the stress patterns of English words presents a challenge for L1 speakers from syllable-timed and/or tone languages. Realization of stress contrasts in previous studies has been measured in a variety of ways. This study adapts and extends Pairwise Variability Index (PVI), a method generally used to measure duration as a property of speech rhythm, to compare F0 and amplitude contrasts across L1 and L2 production of stressed and unstressed syllables in English multisyllabic words. L1 North American English and L1 Taiwan-Mandarin English speech data were extracted from the AESOP-ILAS corpus. Results of acoustic analysis show that overall, stress contrasts were realized most robustly by L1 English speakers. A general pattern of contrast underdifferentiation was found in L2 speakers with respect to F0, duration and intensity, with the most striking difference found in F0. These results corroborate our earlier findings on L1 Mandarin speakers’ production of on-focus/post-focus contrasts in their realization of English narrow focus. Taken together, these results demonstrate that underdifferentiation of prosodic contrasts at both the lexical and phrase levels is a major prosodic feature of Taiwan English; future research will determine whether it can also be found in the L2 English of other syllable-timed or tone language speakers.",
"title": ""
},
{
"docid": "d658b95cc9dc81d0dbb3918795ccab50",
"text": "A brain–computer interface (BCI) is a communication channel which does not depend on the brain’s normal output pathways of peripheral nerves and muscles [1–3]. It supplies paralyzed patients with a new approach to communicate with the environment. Among various brain monitoring methods employed in current BCI research, electroencephalogram (EEG) is the main interest due to its advantages of low cost, convenient operation and non-invasiveness. In present-day EEG-based BCIs, the following signals have been paid much attention: visual evoked potential (VEP), sensorimotor mu/beta rhythms, P300 evoked potential, slow cortical potential (SCP), and movement-related cortical potential (MRCP). Details about these signals can be found in chapter “Brain Signals for Brain–Computer Interfaces”. These systems offer some practical solutions (e.g., cursor movement and word processing) for patients with motor disabilities. In this chapter, practical designs of several BCIs developed in Tsinghua University will be introduced. First of all, we will propose the paradigm of BCIs based on the modulation of EEG rhythms and challenges confronting practical system designs. In Sect. 2, modulation and demodulation methods of EEG rhythms will be further explained. Furthermore, practical designs of a VEP-based BCI and a motor imagery based BCI will be described in Sect. 3. Finally, Sect. 4 will present some real-life application demos using these practical BCI systems.",
"title": ""
},
{
"docid": "e0e00fdfecc4a23994315579938f740e",
"text": "Budget allocation in online advertising deals with distributing the campaign (insertion order) level budgets to different sub-campaigns which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is highly related to the action-attribution problem, i.e. to be able to find out the set of ads, and hence the sub-campaigns that provided them to a user, that an action should be attributed to. For this purpose, we employ both last-touch (last ad gets all credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on the large advertiser performance datasets. We conclude the paper with our empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.",
"title": ""
},
{
"docid": "f53d13eeccff0048fc96e532a52a2154",
"text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. Future prospects are also discussed.",
"title": ""
},
{
"docid": "10959ca4eaa8d8a44629255e98e104da",
"text": "Millimeter-wave (mm-wave) wireless local area networks (WLANs) are expected to provide multi-Gbps connectivity by exploiting the large amount of unoccupied spectrum in e.g. the unlicensed 60 GHz band. However, to overcome the high path loss inherent at these high frequencies, mm-wave networks must employ highly directional beamforming antennas, which makes link establishment and maintenance much more challenging than in traditional omnidirectional networks. In particular, maintaining connectivity under node mobility necessitates frequent re-steering of the transmit and receive antenna beams to re-establish a directional mm-wave link. A simple exhaustive sequential scanning to search for new feasible antenna sector pairs may introduce excessive delay, potentially disrupting communication and lowering the QoS. In this paper, we propose a smart beam steering algorithm for fast 60 GHz link re-establishment under node mobility, which uses knowledge of previous feasible sector pairs to narrow the sector search space, thereby reducing the associated latency overhead. We evaluate the performance of our algorithm in several representative indoor scenarios, based on detailed simulations of signal propagation in a 60 GHz WLAN in WinProp with realistic building materials. We study the effect of indoor layout, antenna sector beamwidth, node mobility pattern, and device orientation awareness. Our results show that the smart beam steering algorithm achieves a 7-fold reduction of the sector search space on average, which directly translates into lower 60 GHz link re-establishment latency. Our results also show that our fast search algorithm selects the near-optimal antenna sector pair for link re-establishment.",
"title": ""
}
] | scidocsrr |
d91a04fca7ac7f25afdb6b5b17dfb53e | Combining Multiple Criteria and Multidimension for Movie Recommender System | [
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
}
] | [
{
"docid": "82535c102f41dc9d47aa65bd71ca23be",
"text": "We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human and computer-controlled entities, and that the existence of a virtual image increases tele-presence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met.",
"title": ""
},
{
"docid": "a2e8ece304e6300d399a4ef38d282623",
"text": "So, they had to be simple, apply to a broad range of systems, and yet exhibit good resolution. Obviously a simple task, but first let’s look at what it means to be autonomous. he recently released DoD Unmanned Aerial Vehicles map [9] discusses advancements in UAV autonomy in of autonomous control levels (ACL). The ACL concept pioneered by researchers in the Air Force Research ratory’s Air Vehicles Directorate who are charged with loping autonomous air vehicles. In the process of loping intelligent autonomous agents for UAV control ms we were constantly challenged to “tell us how omous a UAV is, and how do you think it can be ured...” Usually we hand-waved away the argument and d the questioner will go away since this is a very subjective, complicated, subject, but within the last year we’ve been ted to develop national intelligent autonomous UAV control cs an IQ test for the flyborgs, if you will. The ACL chart result. We’ve done this via intense discussions with other rnment labs and industry, and this paper covers the agreed cs (an extension of the OODA observe, orient, decide, and loop) as well as the precursors, “dead-ends”, and out-andlops investigated to get there. 2. Quick Difference Between Autonomous and Automatic (our definition) Many people don’t realize that there is a significant difference between the words autonomous and automatic. Many news and trade articles use these words interchangeably. Automatic means that a system will do exactly as programmed, it has no choice. Autonomous means that a system has a choice to make free of outside influence, i.e., an autonomous system has free will. For instance, let’s compare functions of an automatic system (autopilot) and an autonomous guidance system: • Autopilot: Stay on course chosen. • Autonomous Guidance: Decide which course to take, then stay on it. eywords: autonomy metrics, machine intelligence ics, UAV, autonomous control Example: a cruise missile is not autonomous, but automatic since all choices have been made prior to launch. Background p levels of the US Department of Defense an effort been initiated to coordinate researchers across the ices and industry in meeting national goals in fixedvehicle development. The Fixed-Wing Vehicle ative (FWV) has broad goals across numerous vehicle nologies. One of those areas is mission management AVs. Our broad goal is to develop the technology ing UAVs to replace human piloted aircraft for any eivable mission. This implies that we have to give s some level of autonomy to accomplish the ions. One of the cornerstones of the FWV process is stablishment of metrics so one know that a goal is hed, but what metrics were available for measuring autonomy? Our research, in conjunction with stry, determined that there was not any sort of metric e desired. Thus we set out to define our own [Note 3. We Need To Measure Autonomy, Not Intelligence For some reason people tend to equate autonomy to intelligence. Looking through the proceedings of the last NIST Intelligent Systems Workshop there are several papers which do this, and in fact, the entire conference sets the tone that “intelligence is autonomy” [3]. They are not the same. Many stupid things are quite autonomous (bacteria) and many very smart things are not (my 3 year old daughter seemingly most of the time). Intelligence (one of a myriad of definitions) is the capability of discovering knowledge and using it to do something. Autonomy is: • the ability to generate one’s own purposes without any instruction from outside (L. 
Fogel) what characteristics should these metrics have? We ded that they needed to be: • having free will (B. Clough) What we want to know is how well a UAV will do a task, or better yet, develop tasks to reach goals, when we’re not around to do it for the UAV. We really don’t care how intelligent it is, just that it does the job assigned. Therefore, intelligence measures tell us little. So, although we could talk about the Turing Test [1] and other intelligence metrics, that is not what we wanted. Easily visualized such that upper management could grasp the concepts in a couple of briefing slides. Broad enough to measure past, present and future autonomous system development. Have enough resolution to easily track impact of technological program investments. Report Documentation Page Form Approved",
"title": ""
},
{
"docid": "05a5620c883117fd45de32f124b32cc6",
"text": "The powerful and democratic activity of social tagging allows the wide set of Web users to add free annotations on resources. Tags express user interests, preferences and needs, but also automatically generate folksonomies. They can be considered as gold mine, especially for e-commerce applications, in order to provide effective recommendations. Thus, several recommender systems exploit folksonomies in this context. Folksonomies have also been involved in many information retrieval approaches. In considering that information retrieval and recommender systems are siblings, we notice that few works deal with the integration of their approaches, concepts and techniques to improve recommendation. This paper is a first attempt in this direction. We propose a trail through recommender systems, social Web, e-commerce and social commerce, tags and information retrieval: an overview on the methodologies, and a survey on folksonomy-based information retrieval from recommender systems point of view, delineating a set of open and new perspectives.",
"title": ""
},
{
"docid": "a791efe9d0414842f7d82e056beaa96f",
"text": "OBJECTIVE\nTo report the outcomes of 500 robotically assisted laparoscopic radical prostatectomies (RALPs), a minimally invasive alternative for treating prostate cancer.\n\n\nPATIENTS AND METHODS\nIn all, 500 patients had RALP over a 30-month period. A transperitoneal six-port approach was used in each case, with the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Prospective data collection included quality-of-life questionnaires, basic demographics (height, weight and body mass index), prostate specific antigen (PSA) levels, clinical stage and Gleason grade. Variables assessed during RALP were operative duration, estimated blood loss (EBL) and complications, and after RALP were hospital stay, catheter time, pathology, PSA level, return of continence and potency.\n\n\nRESULTS\nThe mean (range) duration of RALP was 130 (51-330) min; all procedures were successful, with no intraoperative transfusions or deaths. The mean EBL was 10-300 mL; 97% of patients were discharged home on the first day after RALP with a mean haematocrit of 36%. The mean duration of catheterization was 6.9 (5-21) days. The positive margin rate was 9.4% for all patients; i.e. 2.5% for T2 tumours, 23% for T3a and 53% for T4. The overall biochemical recurrence free (PSA level<0.1 ng/mL) survival was 95% at mean follow-up of 9.7 months. There was complete continence at 3 and 6 months in 89% and 95% of patients, respectively. At 1 year 78% of patients were potent (with or without the use of oral medications), 15% were not yet able to sustain erections capable of intercourse, and another 7% still required injection therapy.\n\n\nCONCLUSION\nRALP is a safe, feasible and minimally invasive alternative for treating prostate cancer. Our initial experience with the procedure shows promising short-term outcomes.",
"title": ""
},
{
"docid": "2939531a61f319ace08f852f783e8734",
"text": "We pose the following question: what happens when test data not only differs from training data, but differs from it in a continually evolving way? The classic domain adaptation paradigm considers the world to be separated into stationary domains with clear boundaries between them. However, in many real-world applications, examples cannot be naturally separated into discrete domains, but arise from a continuously evolving underlying process. Examples include video with gradually changing lighting and spam email with evolving spammer tactics. We formulate a novel problem of adapting to such continuous domains, and present a solution based on smoothly varying embeddings. Recent work has shown the utility of considering discrete visual domains as fixed points embedded in a manifold of lower-dimensional subspaces. Adaptation can be achieved via transforms or kernels learned between such stationary source and target subspaces. We propose a method to consider non-stationary domains, which we refer to as Continuous Manifold Adaptation (CMA). We treat each target sample as potentially being drawn from a different subspace on the domain manifold, and present a novel technique for continuous transform-based adaptation. Our approach can learn to distinguish categories using training data collected at some point in the past, and continue to update its model of the categories for some time into the future, without receiving any additional labels. Experiments on two visual datasets demonstrate the value of our approach for several popular feature representations.",
"title": ""
},
{
"docid": "3840043afe85979eb901ad05b5b8952f",
"text": "Cross media retrieval systems have received increasing interest in recent years. Due to the semantic gap between low-level features and high-level semantic concepts of multimedia data, many researchers have explored joint-model techniques in cross media retrieval systems. Previous joint-model approaches usually focus on two traditional ways to design cross media retrieval systems: (a) fusing features from different media data; (b) learning different models for different media data and fusing their outputs. However, the process of fusing features or outputs will lose both low- and high-level abstraction information of media data. Hence, both ways do not really reveal the semantic correlations among the heterogeneous multimedia data. In this paper, we introduce a novel method for the cross media retrieval task, named Parallel Field Alignment Retrieval (PFAR), which integrates a manifold alignment framework from the perspective of vector fields. Instead of fusing original features or outputs, we consider the cross media retrieval as a manifold alignment problem using parallel fields. The proposed manifold alignment algorithm can effectively preserve the metric of data manifolds, model heterogeneous media data and project their relationship into intermediate latent semantic spaces during the process of manifold alignment. After the alignment, the semantic correlations are also determined. In this way, the cross media retrieval task can be resolved by the determined semantic correlations. Comprehensive experimental results have demonstrated the effectiveness of our approach.",
"title": ""
},
{
"docid": "f6e080319e7455fda0695f324941edcb",
"text": "The Internet of Things (IoT) is a distributed system of physical objects that requires the seamless integration of hardware (e.g., sensors, actuators, electronics) and network communications in order to collect and exchange data. IoT smart objects need to be somehow identified to determine the origin of the data and to automatically detect the elements around us. One of the best positioned technologies to perform identification is RFID (Radio Frequency Identification), which in the last years has gained a lot of popularity in applications like access control, payment cards or logistics. Despite its popularity, RFID security has not been properly handled in numerous applications. To foster security in such applications, this article includes three main contributions. First, in order to establish the basics, a detailed review of the most common flaws found in RFID-based IoT systems is provided, including the latest attacks described in the literature. Second, a novel methodology that eases the detection and mitigation of such flaws is presented. Third, the latest RFID security tools are analyzed and the methodology proposed is applied through one of them (Proxmark 3) to validate it. Thus, the methodology is tested in different scenarios where tags are commonly used for identification. In such systems it was possible to clone transponders, extract information, and even emulate both tags and readers. Therefore, it is shown that the methodology proposed is useful for auditing security and reverse engineering RFID communications in IoT applications. It must be noted that, although this paper is aimed at fostering RFID communications security in IoT applications, the methodology can be applied to any RFID communications protocol.",
"title": ""
},
{
"docid": "35dda859e176a0b53e3aead319d08ae1",
"text": "As our professional, social, and financial existences become increasingly digitized and as our government, healthcare, and military infrastructures rely more on computer technologies, they present larger and more lucrative targets for malware. Stealth malware in particular poses an increased threat because it is specifically designed to evade detection mechanisms, spreading dormant, in the wild for extended periods of time, gathering sensitive information or positioning itself for a high-impact zero-day attack. Policing the growing attack surface requires the development of efficient anti-malware solutions with improved generalization to detect novel types of malware and resolve these occurrences with as little burden on human experts as possible. In this paper, we survey malicious stealth technologies as well as existing solutions for detecting and categorizing these countermeasures autonomously. While machine learning offers promising potential for increasingly autonomous solutions with improved generalization to new malware types, both at the network level and at the host level, our findings suggest that several flawed assumptions inherent to most recognition algorithms prevent a direct mapping between the stealth malware recognition problem and a machine learning solution. The most notable of these flawed assumptions is the closed world assumption: that no sample belonging to a class outside of a static training set will appear at query time. We present a formalized adaptive open world framework for stealth malware recognition and relate it mathematically to research from other machine learning domains.",
"title": ""
},
{
"docid": "b35efe68d99331d481e439ae8fbb4a64",
"text": "Semantic matching (SM) for textual information can be informally defined as the task of effectively modeling text matching using representations more complex than those based on simple and independent set of surface forms of words or stems (typically indicated as bag-of-words). In this perspective, matching named entities (NEs) implies that the associated model can both overcomes mismatch between different representations of the same entities, e.g., George H. W. Bush vs. George Bush, and carry out entity disambiguation to avoid incorrect matches between different but similar entities, e.g., the entity above with his son George W. Bush. This means that both the context and structure of NEs must be taken into account in the IR model. SM becomes even more complex when attempting to match the shared semantics between two larger pieces of text, e.g., phrases or clauses, as there is currently no theory indicating how words should be semantically composed for deriving the meaning of text. The complexity above has traditionally led to define IR models based on bag-of-word representations in the vector space model (VSM), where (i) the necessary structure is minimally taken into account by considering n-grams or phrases; and (ii) the matching coverage is increased by projecting text in latent semantic spaces or alternatively by applying query expansion. Such methods introduce a considerable amount of noise, which negatively balances the benefit of achieving better coverage in most cases, thus producing no IR system improvement. In the last decade, a new class of semantic matching approaches based on the so-called Kernel Methods (KMs) for structured data (see e.g., [4]) have been proposed. KMs also adopt scalar products (which, in this context, take the names of kernel functions) in VSM. However, KMs introduce two new important aspects: • the scalar product is implicitly computed using smart techniques, which enable the use of huge feature spaces, e.g., all possible skip n-grams; and • KMs are typically applied within supervised algorithms, e.g., SVMs, which, exploiting training data, can filter out irrelevant features and noise. In this talk, we will briefly introduce and summarize, the latest results on kernel methods for semantic matching by focusing on structural kernels. These can be applied to match syntactic and/or semantic representations of text shaped as trees. Several variants are available: the Syntactic Tree Kernels (STK), [2], the String Kernels (SK) [5] and the Partial Tree Kernels (PTK) [4]. Most interestingly, we will present tree kernels exploiting SM between words contained in a text structure, i.e., the Syntactic Semantic Tree Kernels (SSTK) [1] and the Smoothed Partial Tree Kernels (SPTK) [3]. These extend STK and PTK by allowing for soft matching (i.e., via similarity computation) between nodes associated with different but related labels, e.g., synonyms. The node similarity can be derived from manually annotated resources, e.g., WordNet or Wikipedia, as well as using corpus-based clustering approaches, e.g., latent semantic analysis (LSA). An example of the use of such kernels for question classification in the question answering domain will illustrate the potentials of their structural similarity approach.",
"title": ""
},
{
"docid": "5221c87f7ee877a0a7ac0a972df4636d",
"text": "These are exciting times for medical image processing. Innovations in deep learning and the increasing availability of large annotated medical image datasets are leading to dramatic advances in automated understanding of medical images. From this perspective, I give a personal view of how computer-aided diagnosis of medical images has evolved and how the latest advances are leading to dramatic improvements today. I discuss the impact of deep learning on automated disease detection and organ and lesion segmentation, with particular attention to applications in diagnostic radiology. I provide some examples of how time-intensive and expensive manual annotation of huge medical image datasets by experts can be sidestepped by using weakly supervised learning from routine clinically generated medical reports. Finally, I identify the remaining knowledge gaps that must be overcome to achieve clinician-level performance of automated medical image processing systems. Computer-aided diagnosis (CAD) in medical imaging has flourished over the past several decades. New advances in computer software and hardware and improved quality of images from scanners have enabled this progress. The main motivations for CAD have been to reduce error and to enable more efficient measurement and interpretation of images. From this perspective, I will describe how deep learning has led to radical changes in howCAD research is conducted and in howwell it performs. For brevity, I will include automated disease detection and image processing under the rubric of CAD. Financial Disclosure The author receives patent royalties from iCAD Medical. Disclaimer No NIH endorsement of any product or company mentioned in this manuscript should be inferred. The opinions expressed herein are the author’s and do not necessarily represent those of NIH. R.M. Summers (B) Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg. 10, Room 1C224D MSC 1182, Bethesda, MD 20892-1182, USA e-mail: rms@nih.gov URL: http://www.cc.nih.gov/about/SeniorStaff/ronald_summers.html © Springer International Publishing Switzerland 2017 L. Lu et al. (eds.), Deep Learning and Convolutional Neural Networks for Medical Image Computing, Advances in Computer Vision and Pattern Recognition, DOI 10.1007/978-3-319-42999-1_1 3",
"title": ""
},
{
"docid": "b29f2d688e541463b80006fac19eaf20",
"text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.",
"title": ""
},
{
"docid": "4a82d4d17d7e51269270b52009e439c7",
"text": "Based on the Field Theory, this study postulates that the cognitive processes involved in making decisions to share information on social media platforms could be dynamically affected by network features and the contextual environment. The field effect is exerted by the reach and richness of network features, which virtually form a psychological pressure on one’s perception of the sharing situation. A research model is developed, in which the effects of extrinsic and intrinsic motivators on information-sharing continuance are moderated by the network features of social media platforms. A global sample from content contributors in two major social media platform contexts, experience-socialization (ES) platforms (N = 568) and intelligence-proliferation (IP) platforms (N = 653), were collected through the participatory research method. By using partial least-square analysis, the moderating effects of network features on cognitive-sharing processes under the two contexts were confirmed. For contributors on ES platforms, network features negatively moderate community identification and perceived enjoyment toward information sharing. By contrast, for contributors on IP platforms, network features negatively moderate the effects of perceived usefulness and altruistic tendencies on their intention to share, but self-efficacy for sharing is positively induced. The conceptualization of network features and refined knowledge about the situational and contextual effects of social media platforms are useful for further studies on social behaviors and may ultimately benefit platform providers in their attempts to promote information-sharing continuance. ã 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9140faa8bd908e5c8d0d9b326f07e231",
"text": "The purpose of this paper is to provide a preliminary report on the rst broad-based experimental comparison of modern heuristics for the asymmetric traveling salesmen problem (ATSP). There are currently three general classes of such heuristics: classical tour construction heuristics such as Nearest Neighbor and the Greedy algorithm, local search algorithms based on re-arranging segments of the tour, as exemplied by the Kanellakis-Papadimitriou algorithm [KP80], and algorithms based on patching together the cycles in a minimum cycle cover, the best of which are variants on an algorithm proposed by Zhang [Zha93]. We test implementations of the main contenders from each class on a variety of instance types, introducing a variety of new random instance generators modeled on real-world applications of the ATSP. Among the many tentative conclusions we reach is that no single algorithm is dominant over all instance classes, although for each class the best tours are found either by Zhang's algorithm or an iterated variant on KanellakisPapadimitriou.",
"title": ""
},
{
"docid": "ebd4901b9352f98f879c27f50e999ef1",
"text": "This paper describes a probabilistic approach to global localization within an in-door environment with minimum infrastructure requirements. Global localization is a flavor of localization in which the device is unaware of its initial position and has to determine the same from scratch. Localization is performed based on the received signal strength indication (RSSI) as the only sensor reading, which is provided by most off-the-shelf wireless network interface cards. Location and orientation estimates are computed using Bayesian filtering on a sample set derived using Monte-Carlo sampling. Research leading to the proposed method is outlined along with results and conclusions from simulations and real life experiments.",
"title": ""
},
{
"docid": "2c7361a6a5b949229edbf9f4fd0fe529",
"text": "We propose a novel mechanism to infer topics of interest of individual users in the Twitter social network. We observe that in Twitter, a user generally follows experts on various topics of her interest in order to acquire information on those topics. We use a methodology based on social annotations (proposed earlier by us) to first deduce the topical expertise of popular Twitter users, and then transitively infer the interests of the users who follow them. This methodology is a sharp departure from the traditional techniques of inferring interests of a user from the tweets that she posts or receives. We show that the topics of interest inferred by the proposed methodology are far superior than the topics extracted by state-of-the-art techniques such as using topic models (Labeled LDA) on tweets. Based upon the proposed methodology, we build a system Who Likes What, which can infer the interests of millions of Twitter users. To our knowledge, this is the first system that can infer interests for Twitter users at such scale. Hence, this system would be particularly beneficial in developing personalized recommender services over the Twitter platform.",
"title": ""
},
{
"docid": "2708052c26111d54ba2c235afa26f71f",
"text": "Reinforcement Learning (RL) has been an interesting research area in Machine Learning and AI. Hierarchical Reinforcement Learning (HRL) that decomposes the RL problem into sub-problems where solving each of which will be more powerful than solving the entire problem will be our concern in this paper. A review of the state-of-the-art of HRL has been investigated. Different HRL-based domains have been highlighted. Different problems in such different domains along with some proposed solutions have been addressed. It has been observed that HRL has not yet been surveyed in the current existing research; the reason that motivated us to work on this paper. Concluding remarks are presented. Some ideas have been emerged during the work on this research and have been proposed for pursuing a future research.",
"title": ""
},
{
"docid": "a3cfab5203348546d901e18ab4cc7c3a",
"text": "Most of neural language models use different kinds of embeddings for word prediction. While word embeddings can be associated to each word in the vocabulary or derived from characters as well as factored morphological decomposition, these word representations are mainly used to parametrize the input, i.e. the context of prediction. This work investigates the effect of using subword units (character and factored morphological decomposition) to build output representations for neural language modeling. We present a case study on Czech, a morphologically-rich language, experimenting with different input and output representations. When working with the full training vocabulary, despite unstable training, our experiments show that augmenting the output word representations with character-based embeddings can significantly improve the performance of the model. Moreover, reducing the size of the output look-up table, to let the character-based embeddings represent rare words, brings further improvement.",
"title": ""
},
{
"docid": "0dfba09dc9a01e4ebca16eb5688c81aa",
"text": "Machine-to-Machine (M2M) refers to technologies with various applications. In order to provide the vision and goals of M2M, an M2M ecosystem with a service platform must be established by the key players in industrial domains so as to substantially reduce development costs and improve time to market of M2M devices and services. The service platform must be supported by M2M enabling technologies and standardization. In this paper, we present a survey of existing M2M service platforms and explore the various research issues and challenges involved in enabling an M2M service platform. We first classify M2M nodes according to their characteristics and required functions, and we then highlight the features of M2M traffic. With these in mind, we discuss the necessity of M2M platforms. By comparing and analyzing the existing approaches and solutions of M2M platforms, we identify the requirements and functionalities of the ideal M2M service platform. Based on these, we propose an M2M service platform (M2SP) architecture and its functionalities, and present the M2M ecosystem with this platform. Different application scenarios are given to illustrate the interaction between the components of the proposed platform. In addition, we discuss the issues and challenges of enabling technologies and standardization activities, and outline future research directions for the M2M network.",
"title": ""
},
{
"docid": "bbd44633e14d9ac1d8e54839cbdf5150",
"text": "This study aimed to examine social media addiction in a sample of university students. Based on the Internet addiction scale developed by Young (1996) the researcher used cross-sectional survey methodology in which a questionnaire was distributed to 1327 undergraduate students with their consent. Factor analysis of the self-report data showed that social media addiction has three independent dimensions. These dimensions were positively related to the users experience with social media; time spent using social media and satisfaction with them. In addition, social media addiction was a negative predictor of academic performance as measured by a student's GPA. Future studies should consider the cultural values of users and examine the context of social media usage.",
"title": ""
},
{
"docid": "4ce9966e6cd081f92a56244cadf24b93",
"text": "User ratings are the essence of recommender systems in e-commerce. Lack of motivation to provide ratings and eligibility to rate generally only after purchase restrain the effectiveness of such systems and contribute to the well-known data sparsity and cold start problems. This article proposes a new information source for recommender systems, called prior ratings. Prior ratings are based on users’ experiences of virtual products in a mediated environment, and they can be submitted prior to purchase. A conceptual model of prior ratings is proposed, integrating the environmental factor presence whose effects on product evaluation have not been studied previously. A user study conducted in website and virtual store modalities demonstrates the validity of the conceptual model, in that users are more willing and confident to provide prior ratings in virtual environments. A method is proposed to show how to leverage prior ratings in collaborative filtering. Experimental results indicate the effectiveness of prior ratings in improving predictive performance. 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] | scidocsrr |
2573b2c0bf97517508e4e11f6a91a414 | DeSTNet: Densely Fused Spatial Transformer Networks | [
{
"docid": "b16992ec2416b420b2115037c78cfd4b",
"text": "Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT. We introduce a deep scattering convolution network, with complex wavelet filters over spatial and angular variables. This representation brings an important improvement to results previously obtained with predefined features over object image databases such as Caltech and CIFAR. The resulting accuracy is comparable to results obtained with unsupervised deep learning and dictionary based representations. This shows that refining image representations by using geometric priors is a promising direction to improve image classification and its understanding.",
"title": ""
}
] | [
{
"docid": "1a7c72a1353e7983c5b55c82be70488d",
"text": "education Ph.D. candidate, EECS, University of California, Berkeley, Spring 2019 (Expected). Advised by Prof. Benjamin Recht. S.M., EECS, Massachusetts Institute of Technology, Spring 2014. Advised by Prof. Samuel Madden. Thesis: Fast Transactions for Multicore In-Memory Databases. B.A., Computer Science, University of California, Berkeley, Fall 2010. B.S., Mechanical Engineering, University of California, Berkeley, Fall 2010.",
"title": ""
},
{
"docid": "71e275e9bb796bda3279820bfdd1dafb",
"text": "Alex M. Brooks Doctor of Philosophy The University of Sydney January 2007 Parametric POMDPs for Planning in Continuous State Spaces This thesis is concerned with planning and acting under uncertainty in partially-observable continuous domains. In particular, it focusses on the problem of mobile robot navigation given a known map. The dominant paradigm for robot localisation is to use Bayesian estimation to maintain a probability distribution over possible robot poses. In contrast, control algorithms often base their decisions on the assumption that a single state, such as the mode of this distribution, is correct. In scenarios involving significant uncertainty, this can lead to serious control errors. It is generally agreed that the reliability of navigation in uncertain environments would be greatly improved by the ability to consider the entire distribution when acting, rather than the single most likely state. The framework adopted in this thesis for modelling navigation problems mathematically is the Partially Observable Markov Decision Process (POMDP). An exact solution to a POMDP problem provides the optimal balance between reward-seeking behaviour and information-seeking behaviour, in the presence of sensor and actuation noise. Unfortunately, previous exact and approximate solution methods have had difficulty scaling to real applications. The contribution of this thesis is the formulation of an approach to planning in the space of continuous parameterised approximations to probability distributions. Theoretical and practical results are presented which show that, when compared with similar methods from the literature, this approach is capable of scaling to larger and more realistic problems. In order to apply the solution algorithm to real-world problems, a number of novel improvements are proposed. Specifically, Monte Carlo methods are employed to estimate distributions over future parameterised beliefs, improving planning accuracy without a loss of efficiency. Conditional independence assumptions are exploited to simplify the problem, reducing computational requirements. Scalability is further increased by focussing computation on likely beliefs, using metric indexing structures for efficient function approximation. Local online planning is incorporated to assist global offline planning, allowing the precision of the latter to be decreased without adversely affecting solution quality. Finally, the algorithm is implemented and demonstrated during real-time control of a mobile robot in a challenging navigation task. We argue that this task is substantially more challenging and realistic than previous problems to which POMDP solution methods have been applied. Results show that POMDP planning, which considers the evolution of the entire probability distribution over robot poses, produces significantly more robust behaviour when compared with a heuristic planner which considers only the most likely states and outcomes.",
"title": ""
},
{
"docid": "0b7f00dcdfdd1fe002b2363097914bba",
"text": "A new field of research, visual analytics, has been introduced. This has been defined as \"the science of analytical reasoning facilitated by interactive visual interfaces\" (Thomas and Cook, 2005). Visual analytic environments, therefore, support analytical reasoning using visual representations and interactions, with data representations and transformation capabilities, to support production, presentation, and dissemination. As researchers begin to develop visual analytic environments, it is advantageous to develop metrics and methodologies to help researchers measure the progress of their work and understand the impact their work has on the users who work in such environments. This paper presents five areas or aspects of visual analytic environments that should be considered as metrics and methodologies for evaluation are developed. Evaluation aspects need to include usability, but it is necessary to go beyond basic usability. The areas of situation awareness, collaboration, interaction, creativity, and utility are proposed as the five evaluation areas for initial consideration. The steps that need to be undertaken to develop systematic evaluation methodologies and metrics for visual analytic environments are outlined",
"title": ""
},
{
"docid": "60f6e3345aae1f91acb187ba698f073b",
"text": "A Cube-Satellite (CubeSat) is a small satellite weighing no more than one kilogram. CubeSats are used for space research, but their low-rate communication capability limits functionality. As greater payload and instrumentation functions are sought, increased data rate is needed. Since most CubeSats currently transmit at a 437 MHz frequency, several directional antenna types were studied for a 2.45 GHz, larger bandwidth transmission. This higher frequency provides the bandwidth needed for increasing the data rate. A deployable antenna mechanism maybe needed because most directional antennas are bigger than the CubeSat size constraints. From the study, a deployable hemispherical helical antenna prototype was built. Transmission between two prototype antenna equipped transceivers at varying distances tested the helical performance. When comparing the prototype antenna's maximum transmission distance to the other commercial antennas, the prototype outperformed all commercial antennas, except the patch antenna. The root cause was due to the helical antenna's narrow beam width. Future work can be done in attaining a more accurate alignment with the satellite's directional antenna to downlink with a terrestrial ground station.",
"title": ""
},
{
"docid": "14b6b544144d6c14cb283fd0ac8308d8",
"text": "Disrupted daily or circadian rhythms of lung function and inflammatory responses are common features of chronic airway diseases. At the molecular level these circadian rhythms depend on the activity of an autoregulatory feedback loop oscillator of clock gene transcription factors, including the BMAL1:CLOCK activator complex and the repressors PERIOD and CRYPTOCHROME. The key nuclear receptors and transcription factors REV-ERBα and RORα regulate Bmal1 expression and provide stability to the oscillator. Circadian clock dysfunction is implicated in both immune and inflammatory responses to environmental, inflammatory, and infectious agents. Molecular clock function is altered by exposomes, tobacco smoke, lipopolysaccharide, hyperoxia, allergens, bleomycin, as well as bacterial and viral infections. The deacetylase Sirtuin 1 (SIRT1) regulates the timing of the clock through acetylation of BMAL1 and PER2 and controls the clock-dependent functions, which can also be affected by environmental stressors. Environmental agents and redox modulation may alter the levels of REV-ERBα and RORα in lung tissue in association with a heightened DNA damage response, cellular senescence, and inflammation. A reciprocal relationship exists between the molecular clock and immune/inflammatory responses in the lungs. Molecular clock function in lung cells may be used as a biomarker of disease severity and exacerbations or for assessing the efficacy of chronotherapy for disease management. Here, we provide a comprehensive overview of clock-controlled cellular and molecular functions in the lungs and highlight the repercussions of clock disruption on the pathophysiology of chronic airway diseases and their exacerbations. Furthermore, we highlight the potential for the molecular clock as a novel chronopharmacological target for the management of lung pathophysiology.",
"title": ""
},
{
"docid": "067eca04f9a60ae7cc4b77faa478ab22",
"text": "The E. coli cytosine deaminase (CD) provides a negative selection system for suicide gene therapy as CD transfectants are eliminated following 5-fluorocytosine (5FC) treatment. Here we report a positive selection system for the CD gene using 5-fluorouracil (5FU) and cytosine in selection medium to screen for CD-positive transfectants. It is based on the relief of 5FU toxicity by uracil which is converted from cytosine via CD catalysis, as uracil competes with the toxic 5FU in subsequent pyrimidine metabolism. Hence, a retroviral vector containing the CD gene may pro- vide both positive and negative selections after gene transfer. The CD transfectants selected with the positive selection system showed susceptibility to 5FC in subsequent negative selection in vitro and in vivo. Therefore, this dual selection system is useful not only for combination therapy with transgene and CD gene, but can also act to eliminate selectively transduced cells after the transgene has furnished its effects or upon undesired conditions if 5FC is applied for negative selection in vivo.",
"title": ""
},
{
"docid": "5e0fe6fb7c9b088540d571cea266d61e",
"text": "With the rapid prevalence of smart mobile devices, the number of mobile Apps available has exploded over the past few years. To facilitate the choice of mobile Apps, existing mobile App recommender systems typically recommend popular mobile Apps to mobile users. However, mobile Apps are highly varied and often poorly understood, particularly for their activities and functions related to privacy and security. Therefore, more and more mobile users are reluctant to adopt mobile Apps due to the risk of privacy invasion and other security concerns. To fill this crucial void, in this paper, we propose to develop a mobile App recommender system with privacy and security awareness. The design goal is to equip the recommender system with the functionality which allows to automatically detect and evaluate the security risk of mobile Apps. Then, the recommender system can provide App recommendations by considering both the Apps' popularity and the users' security preferences. Specifically, a mobile App can lead to security risk because insecure data access permissions have been implemented in this App. Therefore, we first develop the techniques to automatically detect the potential security risk for each mobile App by exploiting the requested permissions. Then, we propose a flexible approach based on modern portfolio theory for recommending Apps by striking a balance between the Apps' popularity and the users' security concerns, and build an App hash tree to efficiently recommend Apps. Finally, we evaluate our approach with extensive experiments on a large-scale data set collected from Google Play. The experimental results clearly validate the effectiveness of our approach.",
"title": ""
},
{
"docid": "2d151d6dcefa227e6ea90637c3f220dd",
"text": "A wide range of approximate methods has been historically proposed for performance-based assessment of frame buildings in the aftermath of an earthquake. Most of these methods typically require a detailed analytical model representation of the respective building in order to assess its seismic vulnerability and post-earthquake functionality. This paper proposes an approximate method for estimating story-based engineering demand parameters (EDPs) such as peak story drift ratios, peak floor absolute accelerations, and residual story drift ratios in steel frame buildings with steel moment-resisting frames (MRFs). The proposed method is based on concepts from structural health monitoring, which does not require the use of detailed analytical models for structural and non-structural damage diagnosis. The proposed method is able to compute story-based EDPs in steel frame buildings with MRFs with reasonable accuracy. Such EDPs can facilitate damage assessment/control as well as building-specific seismic loss assessment. The proposed method is utilized to assess the extent of structural damage in an instrumented steel frame building that experienced the 1994 Northridge earthquake.",
"title": ""
},
{
"docid": "f53d8be1ec89cb8a323388496d45dcd0",
"text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.",
"title": ""
},
{
"docid": "69039983940e885fb261107d78edc258",
"text": "Using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The first set of tools permits the seamless importation of both opaque and transparent source image regions into a destination region. The second set is based on similar mathematical ideas and allows the user to modify the appearance of the image seamlessly, within a selected region. These changes can be arranged to affect the texture, the illumination, and the color of objects lying in the region, or to make tileable a rectangular selection.",
"title": ""
},
{
"docid": "b1b2a83d67456c0f0bf54092cbb06e65",
"text": "The transmission of voice communications as datagram packets over IP networks, commonly known as voice-over-IP (VoIP) telephony, is rapidly gaining wide acceptance. With private phone conversations being conducted on insecure public networks, security of VoIP communications is increasingly important. We present a structured security analysis of the VoIP protocol stack, which consists of signaling (SIP), session description (SDP), key establishment (SDES, MIKEY, and ZRTP) and secure media transport (SRTP) protocols. Using a combination of manual and tool-supported formal analysis, we uncover several design flaws and attacks, most of which are caused by subtle inconsistencies between the assumptions that protocols at different layers of the VoIP stack make about each other. The most serious attack is a replay attack on SDES, which causes SRTP to repeat the keystream used for media encryption, thus completely breaking transport-layer security. We also demonstrate a man-in-the-middle attack on ZRTP, which allows the attacker to convince the communicating parties that they have lost their shared secret. If they are using VoIP devices without displays and thus cannot execute the \"human authentication\" procedure, they are forced to communicate insecurely, or not communicate at all, i.e., this becomes a denial of service attack. Finally, we show that the key derivation process used in MIKEY cannot be used to prove security of the derived key in the standard cryptographic model for secure key exchange.",
"title": ""
},
{
"docid": "46004ee1f126c8a5b76166c5dc081bc8",
"text": "In this study, an energy harvesting chip was developed to scavenge energy from artificial light to charge a wireless sensor node. The chip core is a miniature transformer with a nano-ferrofluid magnetic core. The chip embedded transformer can convert harvested energy from its solar cell to variable voltage output for driving multiple loads. This chip system yields a simple, small, and more importantly, a battery-less power supply solution. The sensor node is equipped with multiple sensors that can be enabled by the energy harvesting power supply to collect information about the human body comfort degree. Compared with lab instruments, the nodes with temperature, humidity and photosensors driven by harvested energy had variation coefficient measurement precision of less than 6% deviation under low environmental light of 240 lux. The thermal comfort was affected by the air speed. A flow sensor equipped on the sensor node was used to detect airflow speed. Due to its high power consumption, this sensor node provided 15% less accuracy than the instruments, but it still can meet the requirement of analysis for predicted mean votes (PMV) measurement. The energy harvesting wireless sensor network (WSN) was deployed in a 24-hour convenience store to detect thermal comfort degree from the air conditioning control. During one year operation, the sensor network powered by the energy harvesting chip retained normal functions to collect the PMV index of the store. According to the one month statistics of communication status, the packet loss rate (PLR) is 2.3%, which is as good as the presented results of those WSNs powered by battery. Referring to the electric power records, almost 54% energy can be saved by the feedback control of an energy harvesting sensor network. These results illustrate that, scavenging energy not only creates a reliable power source for electronic devices, such as wireless sensor nodes, but can also be an energy source by building an energy efficient program.",
"title": ""
},
{
"docid": "3a29bbe76a53c8284123019eba7e0342",
"text": "Although von Ammon' first used the term blepharphimosis in 1841, it was Vignes2 in 1889 who first associated blepharophimosis with ptosis and epicanthus inversus. In 1921, Dimitry3 reported a family in which there were 21 affected subjects in five generations. He described them as having ptosis alone and did not specify any other features, although photographs in the report show that they probably had the full syndrome. Dimitry's pedigree was updated by Owens et a/ in 1960. The syndrome appeared in both sexes and was transmitted as a Mendelian dominant. In 1935, Usher5 reviewed the reported cases. By then, 26 pedigrees had been published with a total of 175 affected persons with transmission mainly through affected males. There was no consanguinity in any pedigree. In three pedigrees, parents who obviously carried the gene were unaffected. Well over 150 families have now been reported and there is no doubt about the autosomal dominant pattern of inheritance. However, like Usher,5 several authors have noted that transmission is mainly through affected males and less commonly through affected females.4 6 Reports by Moraine et al7 and Townes and Muechler8 have described families where all affected females were either infertile with primary or secondary amenorrhoea or had menstrual irregularity. Zlotogora et a/9 described one family and analysed 38 families reported previously. They proposed the existence of two types: type I, the more common type, in which the syndrome is transmitted by males only and affected females are infertile, and type II, which is transmitted by both affected females and males. There is male to male transmission in both types and both are inherited as an autosomal dominant trait. They found complete penetrance in type I and slightly reduced penetrance in type II.",
"title": ""
},
{
"docid": "3b5dcd12c1074100ffede33c8b3a680c",
"text": "This paper proposes a two-stream flow-guided convolutional attention networks for action recognition in videos. The central idea is that optical flows, when properly compensated for the camera motion, can be used to guide attention to the human foreground. We thus develop crosslink layers from the temporal network (trained on flows) to the spatial network (trained on RGB frames). These crosslink layers guide the spatial-stream to pay more attention to the human foreground areas and be less affected by background clutter. We obtain promising performances with our approach on the UCF101, HMDB51 and Hollywood2 datasets.",
"title": ""
},
{
"docid": "747f56b1b03fdb77042597f2f44730d6",
"text": "We introduce KBGAN, an adversarial learning framework to improve the performances of a wide range of existing knowledge graph embedding models. Because knowledge graphs typically only contain positive facts, sampling useful negative training examples is a nontrivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts, but the majority of the generated negative facts can be easily discriminated from positive facts, and will contribute little towards the training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in GANs. This framework is independent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TRANSE and TRANSD, each with assistance from one of the two probability-based models, DISTMULT and COMPLEX. We evaluate the performances of KBGAN on the link prediction task, using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performances of target embedding models under various settings.",
"title": ""
},
{
"docid": "87aef15dc90a8981bda3fcc5b8045d7c",
"text": "Human groups show structured levels of genetic similarity as a consequence of factors such as geographical subdivision and genetic drift. Surveying this structure gives us a scientific perspective on human origins, sheds light on evolutionary processes that shape both human adaptation and disease, and is integral to effectively carrying out the mission of global medical genetics and personalized medicine. Surveys of population structure have been ongoing for decades, but in the past three years, single-nucleotide-polymorphism (SNP) array technology has provided unprecedented detail on human population structure at global and regional scales. These studies have confirmed well-known relationships between distantly related populations and uncovered previously unresolvable relationships among closely related human groups. SNPs represent the first dense genome-wide markers, and as such, their analysis has raised many challenges and insights relevant to the study of population genetics with whole-genome sequences. Here we draw on the lessons from these studies to anticipate the directions that will be most fruitful to pursue during the emerging whole-genome sequencing era.",
"title": ""
},
{
"docid": "c4a2b13eb9d8d9840ff246e02b02f85f",
"text": "In this paper, we study the problem of designing efficient convolutional neural network architectures with the interest in eliminating the redundancy in convolution kernels. In addition to structured sparse kernels, low-rank kernels and the product of low-rank kernels, the product of structured sparse kernels, which is a framework for interpreting the recently-developed interleaved group convolutions (IGC) and its variants (e.g., Xception), has been attracting increasing interests. Motivated by the observation that the convolutions contained in a group convolution in IGC can be further decomposed in the same manner, we present a modularized building block, IGC-V2: interleaved structured sparse convolutions. It generalizes interleaved group convolutions, which is composed of two structured sparse kernels, to the product of more structured sparse kernels, further eliminating the redundancy. We present the complementary condition and the balance condition to guide the design of structured sparse kernels, obtaining a balance among three aspects: model size, computation complexity and classification accuracy. Experimental results demonstrate the advantage on the balance among these three aspects compared to interleaved group convolutions and Xception, and competitive performance compared to other state-of-the-art architecture design methods.",
"title": ""
},
{
"docid": "23d9479a38afa6e8061fe431047bed4e",
"text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.",
"title": ""
},
{
"docid": "ef5cfd6c5eaf48805e39a9eb454aa7b9",
"text": "Neural networks are artificial learning systems. For more than two decades, they have help for detecting hostile behaviors in a computer system. This review describes those systems and theirs limits. It defines and gives neural networks characteristics. It also itemizes neural networks which are used in intrusion detection systems. The state of the art on IDS made from neural networks is reviewed. In this paper, we also make a taxonomy and a comparison of neural networks intrusion detection systems. We end this review with a set of remarks and future works that can be done in order to improve the systems that have been presented. This work is the result of a meticulous scan of the literature.",
"title": ""
},
{
"docid": "c346ddfd1247d335c1a45d094ae2bb60",
"text": "In this paper we introduce a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) up to 360°. Handling such a wide FoV implies the use of non-planar projections and generates specific problems such as for rasterization and clipping of primitives. We propose a novel pre-clip stage specifically adapted to geometric approaches for which problems occur with polygons spanning across the projection discontinuities. Our approach integrates seamlessly with immersive virtual reality systems as it is compatible with stereoscopy, head-tracking, and multi-surface projections. The benchmarking of our approach with different hardware setups could show that it is well compliant with real-time constraint, and capable of displaying a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.",
"title": ""
}
] | scidocsrr |
eade12f9ddcc19e505e9e3b57398c1a0 | Curiosity and exploration: facilitating positive subjective experiences and personal growth opportunities. | [
{
"docid": "971e39e4b99695f249ec1d367b5044f0",
"text": "Research on curiosity has undergone 2 waves of intense activity. The 1st, in the 1960s, focused mainly on curiosity's psychological underpinnings. The 2nd, in the 1970s and 1980s, was characterized by attempts to measure curiosity and assess its dimensionality. This article reviews these contributions with a concentration on the 1st wave. It is argued that theoretical accounts of curiosity proposed during the 1st period fell short in 2 areas: They did not offer an adequate explanation for why people voluntarily seek out curiosity, and they failed to delineate situational determinants of curiosity. Furthermore, these accounts did not draw attention to, and thus did not explain, certain salient characteristics of curiosity: its intensity, transience, association with impulsivity, and tendency to disappoint when satisfied. A new account of curiosity is offered that attempts to address these shortcomings. The new account interprets curiosity as a form of cognitively induced deprivation that arises from the perception of a gap in knowledge or understanding.",
"title": ""
}
] | [
{
"docid": "f0958d2c952c7140c998fa13a2bf4374",
"text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.",
"title": ""
},
{
"docid": "ea6cb11966919ff9ef331766974aa4c7",
"text": "Verifiable secret sharing is an important primitive in distributed cryptography. With the growing interest in the deployment of threshold cryptosystems in practice, the traditional assumption of a synchronous network has to be reconsidered and generalized to an asynchronous model. This paper proposes the first practical verifiable secret sharing protocol for asynchronous networks. The protocol creates a discrete logarithm-based sharing and uses only a quadratic number of messages in the number of participating servers. It yields the first asynchronous Byzantine agreement protocol in the standard model whose efficiency makes it suitable for use in practice. Proactive cryptosystems are another important application of verifiable secret sharing. The second part of this paper introduces proactive cryptosystems in asynchronous networks and presents an efficient protocol for refreshing the shares of a secret key for discrete logarithm-based sharings.",
"title": ""
},
{
"docid": "5fde0c312b9ab2aada7a04a5dffc1d76",
"text": "A security metric measures or assesses the extent to which a system meets its security objectives. Since meaningful quantitative security metrics are largely unavailable, the security community primarily uses qualitative metrics for security. In this paper, we present a novel quantitative metric for the security of computer networks that is based on an analysis of attack graphs. The metric measures the security strength of a network in terms of the strength of the weakest adversary who can successfully penetrate the network. We present an algorithm that computes the minimal sets of required initial attributes for the weakest adversary to possess in order to successfully compromise a network; given a specific network configuration, set of known exploits, a specific goal state, and an attacker class (represented by a set of all initial attacker attributes). We also demonstrate, by example, that diverse network configurations are not always beneficial for network security in terms of penetrability.",
"title": ""
},
{
"docid": "8fda8068ce2cc06b3bcdf06b7e761ca0",
"text": "Image forensics has attracted wide attention during the past decade. However, most existing works aim at detecting a certain operation, which means that their proposed features usually depend on the investigated image operation and they consider only binary classification. This usually leads to misleading results if irrelevant features and/or classifiers are used. For instance, a JPEG decompressed image would be classified as an original or median filtered image if it was fed into a median filtering detector. Hence, it is important to develop forensic methods and universal features that can simultaneously identify multiple image operations. Based on extensive experiments and analysis, we find that any image operation, including existing anti-forensics operations, will inevitably modify a large number of pixel values in the original images. Thus, some common inherent statistics such as the correlations among adjacent pixels cannot be preserved well. To detect such modifications, we try to analyze the properties of local pixels within the image in the residual domain rather than the spatial domain considering the complexity of the image contents. Inspired by image steganalytic methods, we propose a very compact universal feature set and then design a multiclass classification scheme for identifying many common image operations. In our experiments, we tested the proposed features as well as several existing features on 11 typical image processing operations and four kinds of anti-forensic methods. The experimental results show that the proposed strategy significantly outperforms the existing forensic methods in terms of both effectiveness and universality.",
"title": ""
},
{
"docid": "0632f4a3119246ee9cd7b858dc0c3ed4",
"text": "AIM\nIn order to improve the patients' comfort and well-being during and after a stay in the intensive care unit (ICU), the patients' perspective on the intensive care experience in terms of memories is essential. The aim of this study was to describe unpleasant and pleasant memories of the ICU stay in adult mechanically ventilated patients.\n\n\nMETHOD\nMechanically ventilated adults admitted for more than 24hours from two Swedish general ICUs were included and interviewed 5 days after ICU discharge using two open-ended questions. The data were analysed exploring the manifest content.\n\n\nFINDINGS\nOf the 250 patients interviewed, 81% remembered the ICU stay, 71% described unpleasant memories and 59% pleasant. Ten categories emerged from the content analyses (five from unpleasant and five from pleasant memories), contrasting with each other: physical distress and relief of physical distress, emotional distress and emotional well-being, perceptual distress and perceptual well-being, environmental distress and environmental comfort, and stress-inducing care and caring service.\n\n\nCONCLUSION\nMost critical care patients have both unpleasant and pleasant memories of their ICU stay. Pleasant memories such as support and caring service are important to relief the stress and may balance the impact of the distressing memories of the ICU stay.",
"title": ""
},
{
"docid": "cec046aa647ece5f9449c470c6c6edcf",
"text": "In this article we survey ambient intelligence (AmI), including its applications, some of the technologies it uses, and its social and ethical implications. The applications include AmI at home, care of the elderly, healthcare, commerce, and business, recommender systems, museums and tourist scenarios, and group decision making. Among technologies, we focus on ambient data management and artificial intelligence; for example planning, learning, event-condition-action rules, temporal reasoning, and agent-oriented technologies. The survey is not intended to be exhaustive, but to convey a broad range of applications, technologies, and technical, social, and ethical challenges.",
"title": ""
},
{
"docid": "2282af5c9f4de5e0de2aae14c0a47840",
"text": "The penetration of smart devices such as mobile phones, tabs has significantly changed the way people communicate. This has led to the growth of usage of social media tools such as twitter, facebook chats for communication. This has led to development of new challenges and perspectives in the language technologies research. Automatic processing of such texts requires us to develop new methodologies. Thus there is great need to develop various automatic systems such as information extraction, retrieval and summarization. Entity recognition is a very important sub task of Information extraction and finds its applications in information retrieval, machine translation and other higher Natural Language Processing (NLP) applications such as co-reference resolution. Some of the main issues in handling of such social media texts are i) Spelling errors ii) Abbreviated new language vocabulary such as “gr8” for great iii) use of symbols such as emoticons/emojis iv) use of meta tags and hash tags v) Code mixing. Entity recognition and extraction has gained increased attention in Indian research community. However there is no benchmark data available where all these systems could be compared on same data for respective languages in this new generation user generated text. Towards this we have organized the Code Mix Entity Extraction in social media text track for Indian languages (CMEE-IL) in the Forum for Information Retrieval Evaluation (FIRE). We present the overview of CMEE-IL 2016 track. This paper describes the corpus created for Hindi-English and Tamil-English. Here we also present overview of the approaches used by the participants. CCS Concepts • Computing methodologies ~ Artificial intelligence • Computing methodologies ~ Natural language processing • Information systems ~ Information extraction",
"title": ""
},
{
"docid": "8961d0bd4ba45849bd8fa5c53c0cfb1d",
"text": "SUMMARY\nThe program MODELTEST uses log likelihood scores to establish the model of DNA evolution that best fits the data.\n\n\nAVAILABILITY\nThe MODELTEST package, including the source code and some documentation is available at http://bioag.byu. edu/zoology/crandall_lab/modeltest.html.",
"title": ""
},
{
"docid": "9bf698b09e48aa25e1c9bf1fa7885641",
"text": "This paper presents a review of methods and techniques that have been proposed for the segmentation of magnetic resonance (MR) images of the brain, with a special emphasis on the segmentation of white matter lesions. First, artifacts affecting MR images (noise, partial volume effect, and shading artifact) are reviewed and methods that have been proposed to correct for these artifacts are discussed. Next, a taxonomy of generic segmentation algorithms is presented, categorized as region-based, edge-based, and classification algorithms. For each category, the applications proposed in the literature are subdivided into 2-D, 3-D, or multimodal approaches. In each case, tables listing authors, bibliographic references, and methods used have been compiled and are presented. This description of segmentation algorithms is followed by a section on techniques proposed specifically for the analysis of white matter lesions. Finally, a section is dedicated to a review and a comparison of validation methods proposed to assess the accuracy and the reliability of the results obtained with various segmentation algorithms.",
"title": ""
},
{
"docid": "f3a8bb3fdda39554dfd98b639eeba335",
"text": "Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans.",
"title": ""
},
{
"docid": "5d91cf986b61bf095c04b68da2bb83d3",
"text": "The adeno-associated virus (AAV) vector has been used in preclinical and clinical trials of gene therapy for central nervous system (CNS) diseases. One of the biggest challenges of effectively delivering AAV to the brain is to surmount the blood-brain barrier (BBB). Herein, we identified several potential BBB shuttle peptides that significantly enhanced AAV8 transduction in the brain after a systemic administration, the best of which was the THR peptide. The enhancement of AAV8 brain transduction by THR is dose-dependent, and neurons are the primary THR targets. Mechanism studies revealed that THR directly bound to the AAV8 virion, increasing its ability to cross the endothelial cell barrier. Further experiments showed that binding of THR to the AAV virion did not interfere with AAV8 infection biology, and that THR competitively blocked transferrin from binding to AAV8. Taken together, our results demonstrate, for the first time, that BBB shuttle peptides are able to directly interact with AAV and increase the ability of the AAV vectors to cross the BBB for transduction enhancement in the brain. These results will shed important light on the potential applications of BBB shuttle peptides for enhancing brain transduction with systemic administration of AAV vectors.",
"title": ""
},
{
"docid": "a33d982b4dde7c22ffc3c26214b35966",
"text": "Background: In most cases, bug resolution is a collaborative activity among developers in software development where each developer contributes his or her ideas on how to resolve the bug. Although only one developer is recorded as the actual fixer for the bug, the contribution of the developers who participated in the collaboration cannot be neglected.\n Aims: This paper proposes a new approach, called DRETOM (Developer REcommendation based on TOpic Models), to recommending developers for bug resolution in collaborative behavior.\n Method: The proposed approach models developers' interest in and expertise on bug resolving activities based on topic models that are built from their historical bug resolving records. Given a new bug report, DRETOM recommends a ranked list of developers who are potential to participate in and contribute to resolving the new bug according to these developers' interest in and expertise on resolving it.\n Results: Experimental results on Eclipse JDT and Mozilla Firefox projects show that DRETOM can achieve high recall up to 82% and 50% with top 5 and top 7 recommendations respectively.\n Conclusion: Developers' interest in bug resolving activities should be taken into consideration. On condition that the parameter θ of DRETOM is set properly with trials, the proposed approach is practically useful in terms of recall.",
"title": ""
},
{
"docid": "b02dcd4d78f87d8ac53414f0afd8604b",
"text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.",
"title": ""
},
{
"docid": "fd59754c40f05710496d3b9738f97e47",
"text": "The extent to which mental health consumers encounter stigma in their daily lives is a matter of substantial importance for their recovery and quality of life. This article summarizes the results of a nationwide survey of 1,301 mental health consumers concerning their experience of stigma and discrimination. Survey results and followup interviews with 100 respondents revealed experience of stigma from a variety of sources, including communities, families, churches, coworkers, and mental health caregivers. The majority of respondents tended to try to conceal their disorders and worried a great deal that others would find out about their psychiatric status and treat them unfavorably. They reported discouragement, hurt, anger, and lowered self-esteem as results of their experiences, and they urged public education as a means for reducing stigma. Some reported that involvement in advocacy and speaking out when stigma and discrimination were encountered helped them to cope with stigma. Limitations to generalization of results include the self-selection, relatively high functioning of participants, and respondent connections to a specific advocacy organization-the National Alliance for the Mentally Ill.",
"title": ""
},
{
"docid": "bf22279451c635543b583015d3681b7e",
"text": "A simple and compact microstrip-fed ultra wideband (UWB) printed monopole antenna with band-notched performance is proposed in this paper. The antenna is composed of a cirque ring with a small strip bar, so that the antenna occupies about 8.29 GHz bandwidth covering 3.18-11.47 GHz with expected band rejection of 5.09 GHz to 5.88 GHz. A quasi-omnidirectional and quasi-symmetrical radiation pattern is also obtained. This kind of band-notched UWB antenna requires no external filters and thus greatly simplifies the system design of UWB wireless communication.",
"title": ""
},
{
"docid": "4c03c0fc33f8941a7769644b5dfb62ef",
"text": "A multiband MIMO antenna for a 4G mobile terminal is proposed. The antenna structure consists of a multiband main antenna element, a printed inverted-L subantenna element operating in the higher 2.5 GHz bands, and a wideband loop sub-antenna element working in lower 0.9 GHz band. In order to improve the isolation and ECC characteristics of the proposed MIMO antenna, each element is located at a different corner of the ground plane. In addition, the inductive coils are employed to reduce the antenna volume and realize the wideband property of the loop sub-antenna element. Finally, the proposed antenna covers LTE band 7/8, PCS, WiMAX, and WLAN service, simultaneously. The MIMO antenna has ECC lower than 0.15 and isolation higher than 12 dB in both lower and higher frequency bands.",
"title": ""
},
{
"docid": "fca521f5e0b48d27d68f07dfc1641edb",
"text": "To compare cryo-EM images and 3D reconstructions with atomic structures in a quantitative way it is essential to model the electron scattering by solvent (water or ice) that surrounds protein assemblies. The most rigorous method for determining the density of solvating water atoms for this purpose has been to perform molecular-dynamics (MD) simulations of the protein-water system. In this paper we adapt the ideas of bulk-water modeling that are used in the refinement of X-ray crystal structures to the cryo-EM solvent-modeling problem. We present a continuum model for solvent density which matches MD-based results to within sampling errors. However, we also find that the simple binary-mask model of Jiang and Brünger (1994) performs nearly as well as the new model. We conclude that several methods are now available for rapid and accurate modeling of cryo-EM images and maps of solvated proteins.",
"title": ""
},
{
"docid": "8d581aef7779713f3cb9f236fb83d7ff",
"text": "Sandro Botticelli was one of the most esteemed painters and draughtsmen among Renaissance artists. Under the patronage of the De' Medici family, he was active in Florence during the flourishing of the Renaissance trend towards the reclamation of lost medical and anatomical knowledge of ancient times through the dissection of corpses. Combining the typical attributes of the elegant courtly style with hallmarks derived from the investigation and analysis of classical templates, he left us immortal masterpieces, the excellence of which incomprehensibly waned and was rediscovered only in the 1890s. Few know that it has already been reported that Botticelli concealed the image of a pair of lungs in his masterpiece, The Primavera. The present investigation provides evidence that Botticelli embedded anatomic imagery of the lung in another of his major paintings, namely, The Birth of Venus. Both canvases were most probably influenced and enlightened by the neoplatonic philosophy of the humanist teachings in the De' Medici's circle, and they represent an allegorical celebration of the cycle of life originally generated by the Divine Wind or Breath. This paper supports the theory that because of the anatomical knowledge to which he was exposed, Botticelli aimed to enhance the iconographical meaning of both the masterpieces by concealing images of the lung anatomy within them.",
"title": ""
},
{
"docid": "c0d2fcd6daeb433a5729a412828372f8",
"text": "Most 3D reconstruction approaches passively optimise over all data, exhaustively matching pairs, rather than actively selecting data to process. This is costly both in terms of time and computer resources, and quickly becomes intractable for large datasets. This work proposes an approach to intelligently filter large amounts of data for 3D reconstructions of unknown scenes using monocular cameras. Our contributions are twofold: First, we present a novel approach to efficiently optimise the Next-Best View (NBV) in terms of accuracy and coverage using partial scene geometry. Second, we extend this to intelligently selecting stereo pairs by jointly optimising the baseline and vergence to find the NBV’s best stereo pair to perform reconstruction. Both contributions are extremely efficient, taking 0.8ms and 0.3ms per pose, respectively. Experimental evaluation shows that the proposed method allows efficient selection of stereo pairs for reconstruction, such that a dense model can be obtained with only a small number of images. Once a complete model has been obtained, the remaining computational budget is used to intelligently refine areas of uncertainty, achieving results comparable to state-of-the-art batch approaches on the Middlebury dataset, using as little as 3.8% of the views.",
"title": ""
},
{
"docid": "770d48a87dd718d20ea00c16ba0ac530",
"text": "The purpose of this article is to describe emotion regulation, and how emotion regulation may be compromised in patients with autism spectrum disorder (ASD). This information may be useful for clinicians working with children with ASD who exhibit behavioral problems. Suggestions for practice are provided.",
"title": ""
}
] | scidocsrr |
3fa91b18b304566a526737057d5b115b | Attentional convolutional neural networks for object tracking | [
{
"docid": "d349cf385434027b4532080819d5745f",
"text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"title": ""
},
{
"docid": "dacebd3415ec50ca6c74e28048fe6fc8",
"text": "The problem of arbitrary object tracking has traditionally been tackled by learning a model of the object’s appearance exclusively online, using as sole training data the video itself. Despite the success of these methods, their online-only approach inherently limits the richness of the model they can learn. Recently, several attempts have been made to exploit the expressive power of deep convolutional networks. However, when the object to track is not known beforehand, it is necessary to perform Stochastic Gradient Descent online to adapt the weights of the network, severely compromising the speed of the system. In this paper we equip a basic tracking algorithm with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video. Our tracker operates at frame-rates beyond real-time and, despite its extreme simplicity, achieves state-of-the-art performance in multiple benchmarks.",
"title": ""
},
{
"docid": "83f1830c3a9a92eb3492f9157adaa504",
"text": "We propose a novel tracking framework called visual tracker sampler that tracks a target robustly by searching for the appropriate trackers in each frame. Since the real-world tracking environment varies severely over time, the trackers should be adapted or newly constructed depending on the current situation. To do this, our method obtains several samples of not only the states of the target but also the trackers themselves during the sampling process. The trackers are efficiently sampled using the Markov Chain Monte Carlo method from the predefined tracker space by proposing new appearance models, motion models, state representation types, and observation types, which are the basic important components of visual trackers. Then, the sampled trackers run in parallel and interact with each other while covering various target variations efficiently. The experiment demonstrates that our method tracks targets accurately and robustly in the real-world tracking environments and outperforms the state-of-the-art tracking methods.",
"title": ""
}
] | [
{
"docid": "17ac85242f7ee4bc4991e54403e827c4",
"text": "Over the last two decades, an impressive progress has been made in the identification of novel factors in the translocation machineries of the mitochondrial protein import and their possible roles. The role of lipids and possible protein-lipids interactions remains a relatively unexplored territory. Investigating the role of potential lipid-binding regions in the sub-units of the mitochondrial motor might help to shed some more light in our understanding of protein-lipid interactions mechanistically. Bioinformatics results seem to indicate multiple potential lipid-binding regions in each of the sub-units. The subsequent characterization of some of those regions in silico provides insight into the mechanistic functioning of this intriguing and essential part of the protein translocation machinery. Details about the way the regions interact with phospholipids were found by the use of Monte Carlo simulations. For example, Pam18 contains one possible transmembrane region and two tilted surface bound conformations upon interaction with phospholipids. The results demonstrate that the presented bioinformatics approach might be useful in an attempt to expand the knowledge of the possible role of protein-lipid interactions in the mitochondrial protein translocation process.",
"title": ""
},
{
"docid": "4f069eeff7cf99679fb6f31e2eea55f0",
"text": "The present study aims to design, develop, operate and evaluate a social media GIS (Geographical Information Systems) specially tailored to mash-up the information that local residents and governments provide to support information utilization from normal times to disaster outbreak times in order to promote disaster reduction. The conclusions of the present study are summarized in the following three points. (1) Social media GIS, an information system which integrates a Web-GIS, an SNS and Twitter in addition to an information classification function, a button function and a ranking function into a single system, was developed. This made it propose an information utilization system based on the assumption of disaster outbreak times when information overload happens as well as normal times. (2) The social media GIS was operated for fifty local residents who are more than 18 years old for ten weeks in Mitaka City of Tokyo metropolis. Although about 32% of the users were in their forties, about 30% were aged fifties, and more than 10% of the users were in their twenties, thirties and sixties or more. (3) The access survey showed that 260 pieces of disaster information were distributed throughout the whole city of Mitaka. Among the disaster information, danger-related information occupied 20%, safety-related information occupied 68%, and other information occupied 12%. Keywords—Social Media GIS; Web-GIS; SNS; Twitter; Disaster Information; Disaster Reduction; Support for Information Utilization",
"title": ""
},
{
"docid": "8fffe94d662d46b977e0312dc790f4a4",
"text": "Airline companies have increasingly employed electronic commerce (eCommerce) for strategic purposes, most notably in order to achieve long-term competitive advantage and global competitiveness by enhancing customer satisfaction as well as marketing efficacy and managerial efficiency. eCommerce has now emerged as possibly the most representative distribution channel in the airline industry. In this study, we describe an extended technology acceptance model (TAM), which integrates subjective norms and electronic trust (eTrust) into the model, in order to determine their relevance to the acceptance of airline business-to-customer (B2C) eCommerce websites (AB2CEWS). The proposed research model was tested empirically using data collected from a survey of customers who had utilized B2C eCommerce websites of two representative airline companies in South Korea (i.e., KAL and ASIANA) for the purpose of purchasing air tickets. Path analysis was employed in order to assess the significance and strength of the hypothesized causal relationships between subjective norms, eTrust, perceived ease of use, perceived usefulness, attitude toward use, and intention to reuse. Our results provide general support for an extended TAM, and also confirmed its robustness in predicting customers’ intention to reuse AB2CEWS. Valuable information was found from our results regarding the management of AB2CEWS in the formulation of airlines’ Internet marketing strategies. 2008 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3dd8c177ae928f7ccad2aa980bd8c747",
"text": "The quality and nature of knowledge that can be found by an automated knowledge-extraction system depends on its inputs. For systems that learn by reading text, the Web offers a breadth of topics and currency, but it also presents the problems of dealing with casual, unedited writing, non-textual inputs, and the mingling of languages. The results of extraction using the KNEXT system on two Web corpora – Wikipedia and a collection of weblog entries – indicate that, with automatic filtering of the output, even ungrammatical writing on arbitrary topics can yield an extensive knowledge base, which human judges find to be of good quality, with propositions receiving an average score across both corpora of 2.34 (where the range is 1 to 5 and lower is better) versus 3.00 for unfiltered output from the same sources.",
"title": ""
},
{
"docid": "ab5e3f7ad73d8143ae4dc4db40ebfade",
"text": "Knowledge is an essential organizational resource that provides a sustainable competitive advantage in a highly competitive and dynamic economy. SMEs must therefore consider how to promote the sharing of knowledge and expertise between experts who possess it and novices who need to know. Thus, they need to emphaisze and more effectively exploit knowledge-based resources that already exist within the firm. A key issue for the failure of any KM initiative to facilitate knowledge sharing is the lack of consideration of how the organizational and interpersonal context as well as individual characteristics influence knowledge sharing behaviors. Due to the potential benefits that could be realized from knowledge sharing, this study focused on knowledge sharing as one fundamental knowledge-centered activity. Based on the review of previous literature regarding knowledge sharing within and across firms, this study infer that knowledge sharing in a workplace can be influenced by the organizational, individuallevel and technological factors. This study proposes a conceptual model of knowledge sharing within a broad KM framework as an indispensable tool for SMEs internationalization. The model was assessed by using data gathered from employees and managers of twenty-five (25) different SMEs in Norway. The proposed model of knowledge sharing argues that knowledge sharing is influenced by the organizational, individual-level and technological factors. The study also found mediated effect between the organizational factors as well as between the technological factor and knowledge sharing behavior (i.e., being mediated by the individual-level factors). The test results were statistically significant. The organizational factors were acknowledged to have a highly significant role in ensuring that knowledge sharing takes place in the workplace, although the remaining factors play a critical in the knowledge sharing process. For instance, the technological factor may effectively help in creating, storing and distributing explicit knowledge in an accessible and expeditious manner. The implications of the empirical findings are also provided in this study.",
"title": ""
},
{
"docid": "bcf0156fdc95f431c550e0554cddbcbc",
"text": "This paper deals with incremental classification and its particular application to invoice classification. An improved version of an already existant incremental neural network called IGNG (incremental growing neural gas) is used for this purpose. This neural network tries to cover the space of data by adding or deleting neurons as data is fed to the system. The improved version of the IGNG, called I2GNG used local thresholds in order to create or delete neurons. Applied on invoice documents represented with graphs, I2GNG shows a recognition rate of 97.63%.",
"title": ""
},
{
"docid": "363381fbd6a5a19242a432ca80051bba",
"text": "Multimedia data on social websites contain rich semantics and are often accompanied with user-defined tags. To enhance Web media semantic concept retrieval, the fusion of tag-based and content-based models can be used, though it is very challenging. In this article, a novel semantic concept retrieval framework that incorporates tag removal and model fusion is proposed to tackle such a challenge. Tags with useful information can facilitate media search, but they are often imprecise, which makes it important to apply noisy tag removal (by deleting uncorrelated tags) to improve the performance of semantic concept retrieval. Therefore, a multiple correspondence analysis (MCA)-based tag removal algorithm is proposed, which utilizes MCA's ability to capture the relationships among nominal features and identify representative and discriminative tags holding strong correlations with the target semantic concepts. To further improve the retrieval performance, a novel model fusion method is also proposed to combine ranking scores from both tag-based and content-based models, where the adjustment of ranking scores, the reliability of models, and the correlations between the intervals divided on the ranking scores and the semantic concepts are all considered. Comparative results with extensive experiments on the NUS-WIDE-LITE as well as the NUS-WIDE-270K benchmark datasets with 81 semantic concepts show that the proposed framework outperforms baseline results and the other comparison methods with each component being evaluated separately.",
"title": ""
},
{
"docid": "b59e90e5d1fa3f58014dedeea9d5b6e4",
"text": "The results of vitrectomy in 240 consecutive cases of ocular trauma were reviewed. Of these cases, 71.2% were war injuries. Intraocular foreign bodies were present in 155 eyes, of which 74.8% were metallic and 61.9% ferromagnetic. Multivariate analysis identified the prognostic factors predictive of poor visual outcome, which included: (1) presence of an afferent pupillary defect; (2) double perforating injuries; and (3) presence of intraocular foreign bodies. Association of vitreous hemorrhage with intraocular foreign bodies was predictive of a poor prognosis. Eyes with foreign bodies retained in the anterior segment and vitreous had a better prognosis than those with foreign bodies embedded in the retina. Timing of vitrectomy and type of trauma had no significant effect on the final visual results. Prophylactic scleral buckling reduced the incidence of retinal detachment after surgery. Injuries confined to the cornea had a better prognosis than scleral injuries.",
"title": ""
},
{
"docid": "83cc283967bf6bc7f04729a5e08660e2",
"text": "Logicians have, by and large, engaged in the convenient fiction that sentences of natural languages (at least declarative sentences) are either true or false or, at worst, lack a truth value, or have a third value often interpreted as 'nonsense'. And most contemporary linguists who have thought seriously about semantics, especially formal semantics, have largely shared this fiction, primarily for lack of a sensible alternative. Yet students o f language, especially psychologists and linguistic philosophers, have long been attuned to the fact that natural language concepts have vague boundaries and fuzzy edges and that, consequently, natural language sentences will very often be neither true, nor false, nor nonsensical, but rather true to a certain extent and false to a certain extent, true in certain respects and false in other respects. It is common for logicians to give truth conditions for predicates in terms of classical set theory. 'John is tall' (or 'TALL(j) ' ) is defined to be true just in case the individual denoted by 'John' (or ' j ') is in the set of tall men. Putting aside the problem that tallness is really a relative concept (tallness for a pygmy and tallness for a basketball player are obviously different) 1, suppose we fix a population relative to which we want to define tallness. In contemporary America, how tall do you have to be to be tall? 5'8\"? 5'9\"? 5'10\"? 5'11\"? 6'? 6'2\"? Obviously there is no single fixed answer. How old do you have to be to be middle-aged? 35? 37? 39? 40? 42? 45? 50? Again the concept is fuzzy. Clearly any attempt to limit truth conditions for natural language sentences to true, false and \"nonsense' will distort the natural language concepts by portraying them as having sharply defined rather than fuzzily defined boundaries. Work dealing with such questions has been done in psychology. To take a recent example, Eleanor Rosch Heider (1971) took up the question of whether people perceive category membership as a clearcut issue or a matter of degree. For example, do people think of members of a given",
"title": ""
},
{
"docid": "f1efe8868f19ccbb4cf2ab5c08961cdb",
"text": "High peak-to-average power ratio (PAPR) has been one of the major drawbacks of orthogonal frequency division multiplexing (OFDM) systems. In this letter, we propose a novel PAPR reduction scheme, known as PAPR reducing network (PRNet), based on the autoencoder architecture of deep learning. In the PRNet, the constellation mapping and demapping of symbols on each subcarrier is determined adaptively through a deep learning technique, such that both the bit error rate (BER) and the PAPR of the OFDM system are jointly minimized. We used simulations to show that the proposed scheme outperforms conventional schemes in terms of BER and PAPR.",
"title": ""
},
{
"docid": "d88b845296811f881e46ed04e6caca31",
"text": "OBJECTIVES\nThis study evaluated how patient characteristics and duplex ultrasound findings influence management decisions of physicians with specific expertise in the field of chronic venous disease.\n\n\nMETHODS\nWorldwide, 346 physicians with a known interest and experience in phlebology were invited to participate in an online survey about management strategies in patients with great saphenous vein (GSV) reflux and refluxing tributaries. The survey included two basic vignettes representing a 47 year old healthy male with GSV reflux above the knee and a 27 year old healthy female with a short segment refluxing GSV (CEAP classification C2sEpAs2,5Pr in both cases). Participants could choose one or more treatment options. Subsequently, the basic vignettes were modified according to different patient characteristics (e.g. older age, morbid obesity, anticoagulant treatment, peripheral arterial disease), clinical class (C4, C6), and duplex ultrasound findings (e.g. competent terminal valve, larger or smaller GSV diameter, presence of focal dilatation). The authors recorded the distribution of chosen management strategies; adjustment of strategies according to characteristics; and follow up strategies.\n\n\nRESULTS\nA total of 211 physicians (68% surgeons, 12% dermatologists, 12% angiologists, and 8% phlebologists) from 36 different countries completed the survey. In the basic case vignettes 1 and 2, respectively, 55% and 40% of participants proposed to perform endovenous thermal ablation, either with or without concomitant phlebectomies (p < .001). Looking at the modified case vignettes, between 20% and 64% of participants proposed to adapt their management strategy, opting for either a more or a less invasive treatment, depending on the modification introduced. The distribution of chosen management strategies changed significantly for all modified vignettes (p < .05).\n\n\nCONCLUSIONS\nThis study illustrates the worldwide variety in management preferences for treating patients with varicose veins (C2-C6). In clinical practice, patient related and duplex ultrasound related factors clearly influence therapeutic options.",
"title": ""
},
{
"docid": "c1c044c7ede9cfde42878ea162d1f457",
"text": "When designing the rotor of a radial flux permanent magnet synchronous machine (PMSM), one key part is the sizing of the permanent magnets (PM) in the rotor to produce the required air-gap flux density. This paper focuses on the effect that different coefficients have on the air-gap flux density of four radial flux PMSM rotor topologies. A direct connection is shown between magnet volume and flux producing magnet area with the aid of static finite element model simulations of the four rotor topologies. With this knowledge, the calculation of the flux producing magnet area can be done with ease once the minimum magnet volume has been determined. This technique can also be applied in the design of line-start PMSM rotors where the rotor area is limited.",
"title": ""
},
{
"docid": "082f19bb94536f61a7c9e4edd9a9c829",
"text": "Phytoplankton abundance and composition and the cyanotoxin, microcystin, were examined relative to environmental parameters in western Lake Erie during late-summer (2003–2005). Spatially explicit distributions of phytoplankton occurred on an annual basis, with the greatest chlorophyll (Chl) a concentrations occurring in waters impacted by Maumee River inflows and in Sandusky Bay. Chlorophytes, bacillariophytes, and cyanobacteria contributed the majority of phylogenetic-group Chl a basin-wide in 2003, 2004, and 2005, respectively. Water clarity, pH, and specific conductance delineated patterns of group Chl a, signifying that water mass movements and mixing were primary determinants of phytoplankton accumulations and distributions. Water temperature, irradiance, and phosphorus availability delineated patterns of cyanobacterial biovolumes, suggesting that biotic processes (most likely, resource-based competition) controlled cyanobacterial abundance and composition. Intracellular microcystin concentrations corresponded to Microcystis abundance and environmental parameters indicative of conditions coincident with biomass accumulations. It appears that environmental parameters regulate microcystin indirectly, via control of cyanobacterial abundance and distribution.",
"title": ""
},
{
"docid": "43269c32b765b0f5d5d0772e0b1c5906",
"text": "Silver nanoparticles (AgNPs) have been synthesized by Lantana camara leaf extract through simple green route and evaluated their antibacterial and catalytic activities. The leaf extract (LE) itself acts as both reducing and stabilizing agent at once for desired nanoparticle synthesis. The colorless reaction mixture turns to yellowish brown attesting the AgNPs formation and displayed UV-Vis absorption spectra. Structural analysis confirms the crystalline nature and formation of fcc structured metallic silver with majority (111) facets. Morphological studies elicit the formation of almost spherical shaped nanoparticles and as AgNO3 concentration is increased, there is an increment in the particle size. The FTIR analysis evidences the presence of various functional groups of biomolecules of LE is responsible for stabilization of AgNPs. Zeta potential measurement attests the higher stability of synthesized AgNPs. The synthesized AgNPs exhibited good antibacterial activity when tested against Escherichia coli, Pseudomonas spp., Bacillus spp. and Staphylococcus spp. using standard Kirby-Bauer disc diffusion assay. Furthermore, they showed good catalytic activity on the reduction of methylene blue by L. camara extract which is monitored and confirmed by the UV-Vis spectrophotometer.",
"title": ""
},
{
"docid": "966f5ff1ef057f2d19d10865eef35728",
"text": "Recognition of characters in natural images is a challenging task due to the complex background, variations of text size and perspective distortion, etc. Traditional optical character recognition (OCR) engine cannot perform well on those unconstrained text images. A novel technique is proposed in this paper that makes use of convolutional cooccurrence histogram of oriented gradient (ConvCoHOG), which is more robust and discriminative than both the histogram of oriented gradient (HOG) and the co-occurrence histogram of oriented gradients (CoHOG). In the proposed technique, a more informative feature is constructed by exhaustively extracting features from every possible image patches within character images. Experiments on two public datasets including the ICDAr 2003 Robust Reading character dataset and the Street View Text (SVT) dataset, show that our proposed character recognition technique obtains superior performance compared with state-of-the-art techniques.",
"title": ""
},
{
"docid": "4768b338044e38949f50c5856bc1a07c",
"text": "Radio-frequency identification (RFID) technology provides an effective tool for managing traceability along food supply chains. This is because it allows automatic digital registration of data, and therefore reduces errors and enables the availability of information on demand. A complete traceability system can be developed in the wine production sector by joining this technology with the use of wireless sensor networks for monitoring at the vineyards. A proposal of such a merged solution for a winery in Spain has been designed, deployed in an actual environment, and evaluated. It was shown that the system could provide a competitive advantage to the company by improving visibility of the processes performed and the associated control over product quality. Much emphasis has been placed on minimizing the impact of the new system in the current activities.",
"title": ""
},
{
"docid": "3b074e9574838169881e212cb5899d27",
"text": "The introduction of inexpensive 3D data acquisition devices has promisingly facilitated the wide availability and popularity of 3D point cloud, which attracts more attention on the effective extraction of novel 3D point cloud descriptors for accurate and efficient of 3D computer vision tasks. However, how to develop discriminative and robust feature descriptors from various point clouds remains a challenging task. This paper comprehensively investigates the existing approaches for extracting 3D point cloud descriptors which are categorized into three major classes: local-based descriptor, global-based descriptor and hybrid-based descriptor. Furthermore, experiments are carried out to present a thorough evaluation of performance of several state-of-the-art 3D point cloud descriptors used widely in practice in terms of descriptiveness, robustness and efficiency.",
"title": ""
},
{
"docid": "261ab16552e2f7cfcdf89971a066a812",
"text": "The paper demonstrates that in a multi-voltage level (medium and low-voltages) distribution system the incident energy can be reduced to 8 cal/cm2, or even less, (Hazard risk category, HRC 2), so that a PPE outfit of greater than 2 is not required. This is achieved with the current state of the art equipment and protective devices. It is recognized that in the existing distribution systems, not specifically designed with this objective, it may not be possible to reduce arc flash hazard to this low level, unless major changes in the system design and protection are made. A typical industrial distribution system is analyzed, and tables and time coordination plots are provided to support the analysis. Unit protection schemes and practical guidelines for arc flash reduction are provided. The methodology of IEEE 1584 [1] is used for the analyses.",
"title": ""
},
{
"docid": "05540e05370b632f8b8cd165ae7d1d29",
"text": "We describe FreeCam a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive newgeneration autostereoscopic lenticular 3D displays.",
"title": ""
},
{
"docid": "65af21566422d9f0a11f07d43d7ead13",
"text": "Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which is originally proposed for object recognition. Different from traditional convolutional neural networks (CNN), this model has intra-layer recurrent connections in the convolutional layers. Therefore each convolutional layer becomes a two-dimensional recurrent neural network. The units receive constant feed-forward inputs from the previous layer and recurrent inputs from their neighborhoods. While recurrent iterations proceed, the region of context captured by each unit expands. In this way, feature extraction and context modulation are seamlessly integrated, which is different from typical methods that entail separate modules for the two steps. To further utilize the context, a multi-scale RCNN is proposed. Over two benchmark datasets, Standford Background and Sift Flow, the model outperforms many state-of-the-art models in accuracy and efficiency.",
"title": ""
}
] | scidocsrr |
9e2c4c0d69b90a20ed2731cacaff4673 | Robot arm control exploiting natural dynamics | [
{
"docid": "56316a77e260d8122c4812d684f4d223",
"text": "Manipulation fundamentally requires a manipulator to be mechanically coupled to the object being manipulated. A consideration of the physical constraints imposed by dynamic interaction shows that control of a vector quantity such as position or force is inadequate and that control of the manipulator impedance is also necessary. Techniques for control of manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.",
"title": ""
}
] | [
{
"docid": "41eab64d00f1a4aaea5c5899074d91ca",
"text": "Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.",
"title": ""
},
{
"docid": "c49ffcb45cc0a7377d9cbdcf6dc07057",
"text": "Dermoscopy is an in vivo method for the early diagnosis of malignant melanoma and the differential diagnosis of pigmented lesions of the skin. It has been shown to increase diagnostic accuracy over clinical visual inspection in the hands of experienced physicians. This article is a review of the principles of dermoscopy as well as recent technological developments.",
"title": ""
},
{
"docid": "b0eea601ef87dbd1d7f39740ea5134ae",
"text": "Syndromal classification is a well-developed diagnostic system but has failed to deliver on its promise of the identification of functional pathological processes. Functional analysis is tightly connected to treatment but has failed to develop testable. replicable classification systems. Functional diagnostic dimensions are suggested as a way to develop the functional classification approach, and experiential avoidance is described as 1 such dimension. A wide range of research is reviewed showing that many forms of psychopathology can be conceptualized as unhealthy efforts to escape and avoid emotions, thoughts, memories, and other private experiences. It is argued that experiential avoidance, as a functional diagnostic dimension, has the potential to integrate the efforts and findings of researchers from a wide variety of theoretical paradigms, research interests, and clinical domains and to lead to testable new approaches to the analysis and treatment of behavioral disorders. Steven C. Haves, Kelly G. Wilson, Elizabeth V. Gifford, and Victoria M. Follette. Department of Psychology. University of Nevada: Kirk Strosahl, Mental Health Center, Group Health Cooperative, Seattle, Washington. Preparation of this article was supported in part by Grant DA08634 from the National Institute on Drug Abuse. Correspondence concerning this article should be addressed to Steven C. Hayes, Department of Psychology, Mailstop 296, College of Arts and Science. University of Nevada, Reno, Nevada 89557-0062. The process of classification lies at the root of all scientific behavior. It is literally impossible to speak about a truly unique event, alone and cut off from all others, because words themselves are means of categorization (Brunei, Goodnow, & Austin, 1956). Science is concerned with refined and systematic verbal formulations of events and relations among events. Because \"events\" are always classes of events, and \"relations\" are always classes of relations, classification is one of the central tasks of science. The field of psychopathology has seen myriad classification systems (Hersen & Bellack, 1988; Sprock & Blashfield, 1991). The differences among some of these approaches are both long-standing and relatively unchanging, in part because systems are never free from a priori assumptions and guiding principles that provide a framework for organizing information (Adams & Cassidy, 1993). In the present article, we briefly examine the differences between two core classification strategies in psychopathology syndromal and functional. We then articulate one possible functional diagnostic dimension: experiential avoidance. Several common syndromal categories are examined to see how this dimension can organize data found among topographical groupings. Finally, the utility and implications of this functional dimensional category are examined. Comparing Syndromal and Functional Classification Although there are many purposes to diagnostic classification, most researchers seem to agree that the ultimate goal is the development of classes, dimensions, or relational categories that can be empirically wedded to treatment strategies (Adams & Cassidy, 1993: Hayes, Nelson & Jarrett, 1987: Meehl, 1959). Syndromal classification – whether dimensional or categorical – can be traced back to Wundt and Galen and, thus, is as old as scientific psychology itself (Eysenck, 1986). 
Syndromal classification starts with constellations of signs and symptoms to identify the disease entities that are presumed to give rise to these constellations. Syndromal classification thus starts with structure and, it is hoped, ends with utility. The attempt in functional classification, conversely, is to start with utility by identifying functional processes with clear treatment implications. It then works backward and returns to the issue of identifiable signs and symptoms that reflect these processes. These differences are fundamental. Syndromal Classification The economic and political dominance of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (e.g., 4th ed.; DSM -IV; American Psychiatric Association, 1994) has lead to a worldwide adoption of syndromal classification as an analytic strategy in psychopathology. The only widely used alternative, the International Classification of Diseases (ICD) system, was a source document for the original DSM, and continuous efforts have been made to ensure their ongoing compatibility (American Psychiatric Association 1994). The immediate goal of syndromal classification (Foulds. 1971) is to identify collections of signs (what one sees) and symptoms (what the client's complaint is). The hope is that these syndromes will lead to the identification of disorders with a known etiology, course, and response to treatment. When this has been achieved, we are no longer speaking of syndromes but of diseases. Because the construct of disease involves etiology and response to treatment, these classifications are ultimately a kind of functional unit. Thus, the syndromal classification approach is a topographically oriented classification strategy for the identification of functional units of abnormal behavior. When the same topographical outcome can be established by diverse processes, or when very different topographical outcomes can come from the same process, the syndromal model has a difficult time actually producing its intended functional units (cf. Bandura, 1982; Meehl, 1978). Some medical problems (e.g., cancer) have these features, and in these areas medical researchers no longer look to syndromal classification as a quick route to an understanding of the disease processes involved. The link between syndromes (topography of signs and symptoms) and diseases (function) has been notably weak in psychopathology. After over 100 years of effort, almost no psychological diseases have been clearly identified. With the exception of general paresis and a few clearly neurological disorders, psychiatric syndromes have remained syndromes indefinitely. In the absence of progress toward true functional entities, syndromal classification of psychopathology has several down sides. Symptoms are virtually non-falsifiable, because they depend only on certain formal features. Syndromal categories tend to evolve changing their names frequently and splitting into ever finer subcategories but except for political reasons (e.g., homosexuality as a disorder) they rarely simply disappear. As a result, the number of syndromes within the DSM system has increased exponentially (Follette, Houts, & Hayes, 1992). Increasingly refined topographical distinctions can always be made without the restraining and synthesizing effect of the identification of common etiological processes. In physical medicine, syndromes regularly disappear into disease categories. 
A wide variety of symptoms can be caused by a single disease, or a common symptom can be explained by very different diseases entities. For example, \"headaches\" are not a disease, because they could be due to influenza, vision problems, ruptured blood vessels, or a host of other factors. These etiological factors have very different treatment implications. Note that the reliability of symptom detection is not what is at issue. Reliably diagnosing headaches does not translate into reliably diagnosing the underlying functional entity, which after all is the crucial factor for treatment decisions. In the same way, the increasing reliability of DSM diagnoses is of little consolation in and of itself. The DSM system specifically eschews the primary importance of functional processes: \"The approach taken in DSM-III is atheoretical with regard to etiology or patho-physiological process\" (American Psychiatric Association, 1980, p. 7). This spirit of etiological agnosticism is carried forward in the most recent DSM incarnation. It is meant to encourage users from widely varying schools of psychology to use the same classification system. Although integration is a laudable goal, the price paid may have been too high (Follette & Hayes, 1992). For example, the link between syndromal categories and biological markers or change processes has been consistently disappointing. To date, compellingly sensitive and specific physiological markers have not been identified for any psychiatric syndrome (Hoes, 1986). Similarly, the link between syndromes and differential treatment has long been known to be weak (see Hayes et al., 1987). We still do not have compelling evidence that syndromal classification contributes substantially to treatment outcome (Hayes et al., 1987). Even in those few instances and not others, mechanisms of change are often unclear of unexamined (Follette, 1995), in part because syndromal categories give researchers few leads about where even to look. Without attention to etiology, treatment utility, and pathological process, the current syndromal system seems unlikely to evolve rapidly into a functional, theoretically relevant system. Functional Classification In a functional approach to classification, the topographical characteristics of any particular individual's behavior is not the basis for classification; instead, behaviors and sets of behaviors are organized by the functional processes that are thought to have produced and maintained them. This functional method is inherently less direct and naive than a syndromal approach, as it requires the application of pre-existing information about psychological processes to specific response forms. It thus integrates at least rudimentary forms of theory into the classification strategy, in sharp contrast with the atheoretical goals of the DSM system. Functional Diagnostic Dimensions as a Method of Functional Classification Classical functional analysis is the most dominant example of a functional classification system. It consists of six steps (Hayes & Follette, 1992) -Step 1: identify potentially relevant characterist",
"title": ""
},
{
"docid": "ef9947c8f478d6274fcbcf8c9e300806",
"text": "The introduction in 1998 of multi-detector row computed tomography (CT) by the major CT vendors was a milestone with regard to increased scan speed, improved z-axis spatial resolution, and better utilization of the available x-ray power. In this review, the general technical principles of multi-detector row CT are reviewed as they apply to the established four- and eight-section systems, the most recent 16-section scanners, and future generations of multi-detector row CT systems. Clinical examples are used to demonstrate both the potential and the limitations of the different scanner types. When necessary, standard single-section CT is referred to as a common basis and starting point for further developments. Another focus is the increasingly important topic of patient radiation exposure, successful dose management, and strategies for dose reduction. Finally, the evolutionary steps from traditional single-section spiral image-reconstruction algorithms to the most recent approaches toward multisection spiral reconstruction are traced.",
"title": ""
},
{
"docid": "de9e0e080ec3210d771bfffb426e0245",
"text": "PURPOSE\nTo compare adults who stutter with and without support group experience on measures of self-esteem, self-efficacy, life satisfaction, self-stigma, perceived stuttering severity, perceived origin and future course of stuttering, and importance of fluency.\n\n\nMETHOD\nParticipants were 279 adults who stutter recruited from the National Stuttering Association and Board Recognized Specialists in Fluency Disorders. Participants completed a Web-based survey comprised of various measures of well-being including the Rosenberg Self-Esteem Scale, Generalized Self-Efficacy Scale, Satisfaction with Life Scale, a measure of perceived stuttering severity, the Self-Stigma of Stuttering Scale, and other stuttering-related questions.\n\n\nRESULTS\nParticipants with support group experience as a whole demonstrated lower internalized stigma, were more likely to believe that they would stutter for the rest of their lives, and less likely to perceive production of fluent speech as being highly or moderately important when talking to other people, compared to participants with no support group experience. Individuals who joined support groups to help others feel better about themselves reported higher self-esteem, self-efficacy, and life satisfaction, and lower internalized stigma and perceived stuttering severity, compared to participants with no support group experience. Participants who stutter as an overall group demonstrated similar levels of self-esteem, higher self-efficacy, and lower life satisfaction compared to averages from normative data for adults who do not stutter.\n\n\nCONCLUSIONS\nFindings support the notion that self-help support groups limit internalization of negative attitudes about the self, and that focusing on helping others feel better in a support group context is linked to higher levels of psychological well-being.\n\n\nEDUCATIONAL OBJECTIVES\nAt the end of this activity the reader will be able to: (a) describe the potential psychological benefits of stuttering self-help support groups for people who stutter, (b) contrast between important aspects of well-being including self-esteem self-efficacy, and life satisfaction, (c) summarize differences in self-esteem, self-efficacy, life satisfaction, self-stigma, perceived stuttering severity, and perceptions of stuttering between adults who stutter with and without support group experience, (d) summarize differences in self-esteem, self-efficacy, and life satisfaction between adults who stutter and normative data for adults who do not stutter.",
"title": ""
},
{
"docid": "c8e34c208f11c367e1f131edaa549c20",
"text": "Recently one dimensional (1-D) nanostructured metal-oxides have attracted much attention because of their potential applications in gas sensors. 1-D nanostructured metal-oxides provide high surface to volume ratio, while maintaining good chemical and thermal stabilities with minimal power consumption and low weight. In recent years, various processing routes have been developed for the synthesis of 1-D nanostructured metal-oxides such as hydrothermal, ultrasonic irradiation, electrospinning, anodization, sol-gel, molten-salt, carbothermal reduction, solid-state chemical reaction, thermal evaporation, vapor-phase transport, aerosol, RF sputtering, molecular beam epitaxy, chemical vapor deposition, gas-phase assisted nanocarving, UV lithography and dry plasma etching. A variety of sensor fabrication processing routes have also been developed. Depending on the materials, morphology and fabrication process the performance of the sensor towards a specific gas shows a varying degree of success. This article reviews and evaluates the performance of 1-D nanostructured metal-oxide gas sensors based on ZnO, SnO(2), TiO(2), In(2)O(3), WO(x), AgVO(3), CdO, MoO(3), CuO, TeO(2) and Fe(2)O(3). Advantages and disadvantages of each sensor are summarized, along with the associated sensing mechanism. Finally, the article concludes with some future directions of research.",
"title": ""
},
{
"docid": "35fbdf776186afa7d8991fa4ff22503d",
"text": "Lang Linguist Compass 2016; 10: 701–719 wileyo Abstract Research and industry are becoming more and more interested in finding automatically the polarised opinion of the general public regarding a specific subject. The advent of social networks has opened the possibility of having access to massive blogs, recommendations, and reviews. The challenge is to extract the polarity from these data, which is a task of opinion mining or sentiment analysis. The specific difficulties inherent in this task include issues related to subjective interpretation and linguistic phenomena that affect the polarity of words. Recently, deep learning has become a popular method of addressing this task. However, different approaches have been proposed in the literature. This article provides an overview of deep learning for sentiment analysis in order to place these approaches in context.",
"title": ""
},
{
"docid": "5fa019a88de4a1683ee63b2a25f8c285",
"text": "Metabolomics is increasingly being applied towards the identification of biomarkers for disease diagnosis, prognosis and risk prediction. Unfortunately among the many published metabolomic studies focusing on biomarker discovery, there is very little consistency and relatively little rigor in how researchers select, assess or report their candidate biomarkers. In particular, few studies report any measure of sensitivity, specificity, or provide receiver operator characteristic (ROC) curves with associated confidence intervals. Even fewer studies explicitly describe or release the biomarker model used to generate their ROC curves. This is surprising given that for biomarker studies in most other biomedical fields, ROC curve analysis is generally considered the standard method for performance assessment. Because the ultimate goal of biomarker discovery is the translation of those biomarkers to clinical practice, it is clear that the metabolomics community needs to start “speaking the same language” in terms of biomarker analysis and reporting-especially if it wants to see metabolite markers being routinely used in the clinic. In this tutorial, we will first introduce the concept of ROC curves and describe their use in single biomarker analysis for clinical chemistry. This includes the construction of ROC curves, understanding the meaning of area under ROC curves (AUC) and partial AUC, as well as the calculation of confidence intervals. The second part of the tutorial focuses on biomarker analyses within the context of metabolomics. This section describes different statistical and machine learning strategies that can be used to create multi-metabolite biomarker models and explains how these models can be assessed using ROC curves. In the third part of the tutorial we discuss common issues and potential pitfalls associated with different analysis methods and provide readers with a list of nine recommendations for biomarker analysis and reporting. To help readers test, visualize and explore the concepts presented in this tutorial, we also introduce a web-based tool called ROCCET (ROC Curve Explorer & Tester, http://www.roccet.ca ). ROCCET was originally developed as a teaching aid but it can also serve as a training and testing resource to assist metabolomics researchers build biomarker models and conduct a range of common ROC curve analyses for biomarker studies.",
"title": ""
},
{
"docid": "28f9a2b2f6f4e90de20c6af78727b131",
"text": "The detection and potential removal of duplicates is desirable for a number of reasons, such as to reduce the need for unnecessary storage and computation, and to provide users with uncluttered search results. This paper describes an investigation into the application of scalable simhash and shingle state of the art duplicate detection algorithms for detecting near duplicate documents in the CiteSeerX digital library. We empirically explored the duplicate detection methods and evaluated their performance and application to academic documents and identified good parameters for the algorithms. We also analyzed the types of near duplicates identified by each algorithm. The highest F-scores achieved were 0.91 and 0.99 for the simhash and shingle-based methods respectively. The shingle-based method also identified a larger variety of duplicate types than the simhash-based method.",
"title": ""
},
{
"docid": "33b2c5abe122a66b73840506aa3b443e",
"text": "Semantic role labeling, the computational identification and labeling of arguments in text, has become a leading task in computational linguistics today. Although the issues for this task have been studied for decades, the availability of large resources and the development of statistical machine learning methods have heightened the amount of effort in this field. This special issue presents selected and representative work in the field. This overview describes linguistic background of the problem, the movement from linguistic theories to computational practice, the major resources that are being used, an overview of steps taken in computational systems, and a description of the key issues and results in semantic role labeling (as revealed in several international evaluations). We assess weaknesses in semantic role labeling and identify important challenges facing the field. Overall, the opportunities and the potential for useful further research in semantic role labeling are considerable.",
"title": ""
},
{
"docid": "b248655d158da77d257a243ee331aa34",
"text": "Paraphrase identification is a fundamental task in natural language process areas. During the process of fulfilling this challenge, different features are exploited. Semantically equivalence and syntactic similarity are of the most importance. Apart from advance feature extraction, deep learning based models are also proven their promising in natural language process jobs. As a result in this research, we adopted an interactive representation to modelling the relationship between two sentences not only on word level, but also on phrase and sentence level by employing convolution neural network to conduct paraphrase identification by using semantic and syntactic features at the same time. The experimental study on commonly used MSRP has shown the proposed method's promising potential.",
"title": ""
},
{
"docid": "31d14e88b7c1aa953c1efac75da26d24",
"text": "This session will focus on ways that Python is being used to successfully facilitate introductory computer science courses. After a brief introduction, we will present three different models for CS1 and CS2 using Python. Attendees will then participate in a discussion/question-answer session considering the advantages and challenges of using Python in the introductory courses. The presenters will focus on common issues, both positive and negative, that have arisen from the inclusion of Python in the introductory computer science curriculum as well as the impact that this can have on the entire computer science curriculum.",
"title": ""
},
{
"docid": "d39ada44eb3c1c9b5dfa1abd0f1fbc22",
"text": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members.",
"title": ""
},
{
"docid": "895f0424cb71c79b86ecbd11a4f2eb8e",
"text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. Ein chronischer Alkoholiker, mit subtotaler Gastrectomie, litt an einem Syndrom dauernder Muskelfaseraktivität, das mit Diphenylhydantoin behandelt wurde. Der Patient wies minimale Störungen im Sinne einer distalen sensori-motorischen Polyneuropathie auf. Die Symptome dieses Syndroms bestehen in: Fazikulationen, Muskelsteife, Myokymien, eine gestörte Erschlaffung nach der Willküraktivität und eine Myotonie nach Beklopfen des Muskels. Das Elektromyogramm in Ruhe zeigt: Faszikulationen, Doublets, Triplets, Multiplets, Trains repetitiver Potentiale und myotonische Entladungen. Trousseau- und Chvostek-Zeichen waren nicht nachweisbar. Gleichzeitig lagen die Kalium-, Calcium-, Magnesium-, Kreatinkinase- und Alkalinphosphatase-Werte im Serumspiegel sowie O2, CO2 und pH des arteriellen Blutes im Normbereich. Aber das Niveau des Vitamin B12 im Serumspiegel war deutlich herabgesetzt. Die muskelbioptische und elektrophysiologische Veränderungen weisen auf eine gemischte sensori-motorische Polyneuropathie hin. Die Abnahme der Amplitude der evozierten Potentiale, vom M. abductor digiti minimi abgeleitet, bei repetitiver Reizung des N. ulnaris, stellten eine Störung der neuromuskulären Überleitung dar. Aufgrund unserer klinischen und elektrophysiologischen Befunde könnten wir die Hypererregbarkeit und Hyperaktivität der peripheren motorischen Axonen als Hauptmechanismus des Syndroms dauernder motorischer Einheitsaktivität betrachten.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
{
"docid": "c7eb67093a6f00bec0d96607e6384378",
"text": "Two primary simulations have been developed and are being updated for the Mars Smart Lander Entry, Descent, and Landing (EDL). The high fidelity engineering end-to-end EDL simulation that is based on NASA Langley’s Program to Optimize Simulated Trajectories (POST) and the end-to-end real-time, hardware-in-the-loop simulation test bed, which is based on NASA JPL’s Dynamics Simulator for Entry, Descent and Surface landing (DSENDS). This paper presents the status of these Mars Smart Lander EDL end-to-end simulations at this time. Various models, capabilities, as well as validation and verification for these simulations are discussed.",
"title": ""
},
{
"docid": "cd52e8b57646a81d985b2fab9083bda9",
"text": "Tagging of faces present in a photo or video at shot level has multiple applications related to indexing and retrieval. Face clustering, which aims to group similar faces corresponding to an individual, is a fundamental step of face tagging. We present a progressive method of applying easy-to-hard grouping technique that applies increasingly sophisticated feature descriptors and classifiers on reducing number of faces from each of the iteratively generated clusters. Our primary goal is to design a cost effective solution for deploying it on low-power devices like mobile phones. First, the method initiates the clustering process by applying K-Means technique with relatively large K value on simple LBP features to generate the first set of high precision clusters. Multiple clusters generated for each individual (low recall) are then progressively merged by applying linear and non-linear subspace modelling strategies on custom selected sophisticated features like Gabor filter, Gabor Jets, and Spin LGBP (Local Gabor Binary Patterns) with spatially spinning bin support for histogram computation. Our experiments on the standard face databases like YouTube Faces, YouTube Celebrities, Indian Movie Face database, eNTERFACE, Multi-Pie, CK+, MindReading and internally collected mobile phone samples demonstrate the effectiveness of proposed approach as compared to state-of-the-art methods and a commercial solution on a mobile phone.",
"title": ""
},
{
"docid": "aeb3e0b089e658b532b3ed6c626898dd",
"text": "Semantics is seen as the key ingredient in the next phase of the Web infrastructure as well as the next generation of information systems applications. In this context, we review some of the reservations expressed about the viability of the Semantic Web. We respond to these by identifying a Semantic Technology that supports the key capabilities also needed to realize the Semantic Web vision, namely representing, acquiring and utilizing knowledge. Given that scalability is a key challenge, we briefly review our observations from developing three classes of real world applications and corresponding technology components: search/browsing, integration, and analytics. We distinguish this proven technology from some parts of the Semantic Web approach and offer subjective remarks which we hope will foster additional debate.",
"title": ""
},
{
"docid": "ddb7d3fc66cf693e3283caa1d7f988a1",
"text": "Customer reviews on online shopping platforms have potential commercial value. Realizing business intelligence by automatically extracting customers’ emotional attitude toward product features from a large amount of reviews, through fine-grained sentiment analysis, is of great importance. Long short-term memory (LSTM) network performs well in sentiment analysis of English reviews. A novel method that extended the network to Chinese product reviews was proposed to improve the performance of sentiment analysis on Chinese product reviews. Considering the differences between Chinese and English, a series of revisions were made to the Chinese corpus, such as word segmentation and stop word pruning. The review corpora vectorization was achieved by word2vec, and a LSTM network model was established based on the mathematical theories of the recurrent neural network. Finally, the feasibility of the LSTM model in finegrained sentiment analysis on Chinese product reviews was verified via experiment. Results demonstrate that the maximum accuracy of the experiment is 90.74%, whereas the maximum of F-score is 65.47%. The LSTM network proves to be feasible and effective when applied to sentiment analysis on product features of Chinese customer reviews. The performance of the LSTM network on fine-grained sentiment analysis is noticeably superior to that of the traditional machine learning method. This study provides references for fine-grained sentiment analysis on Chinese customer",
"title": ""
},
{
"docid": "11333e88e8ff98422bdbf7d7846e9807",
"text": "As a fundamental task, document similarity measure has broad impact to document-based classification, clustering and ranking. Traditional approaches represent documents as bag-of-words and compute document similarities using measures like cosine, Jaccard, and dice. However, entity phrases rather than single words in documents can be critical for evaluating document relatedness. Moreover, types of entities and links between entities/words are also informative. We propose a method to represent a document as a typed heterogeneous information network (HIN), where the entities and relations are annotated with types. Multiple documents can be linked by the words and entities in the HIN. Consequently, we convert the document similarity problem to a graph distance problem. Intuitively, there could be multiple paths between a pair of documents. We propose to use the meta-path defined in HIN to compute distance between documents. Instead of burdening user to define meaningful meta paths, an automatic method is proposed to rank the meta-paths. Given the meta-paths associated with ranking scores, an HIN-based similarity measure, KnowSim, is proposed to compute document similarities. Using Freebase, a well-known world knowledge base, to conduct semantic parsing and construct HIN for documents, our experiments on 20Newsgroups and RCV1 datasets show that KnowSim generates impressive high-quality document clustering.",
"title": ""
}
] | scidocsrr |
9533e12030829a8be8bc9b5ea1b1f59b | EmotionCheck: leveraging bodily signals and false feedback to regulate our emotions | [
{
"docid": "e88bac9a4023b1c741c720e034669109",
"text": "We present AffectAura, an emotional prosthetic that allows users to reflect on their emotional states over long periods of time. We designed a multimodal sensor set-up for continuous logging of audio, visual, physiological and contextual data, a classification scheme for predicting user affective state and an interface for user reflection. The system continuously predicts a user's valence, arousal and engage-ment, and correlates this with information on events, communications and data interactions. We evaluate the interface through a user study consisting of six users and over 240 hours of data, and demonstrate the utility of such a reflection tool. We show that users could reason forward and backward in time about their emotional experiences using the interface, and found this useful.",
"title": ""
},
{
"docid": "c767a9b6808b4556c6f55dd406f8eb0d",
"text": "BACKGROUND\nInterest in mindfulness has increased exponentially, particularly in the fields of psychology and medicine. The trait or state of mindfulness is significantly related to several indicators of psychological health, and mindfulness-based therapies are effective at preventing and treating many chronic diseases. Interest in mobile applications for health promotion and disease self-management is also growing. Despite the explosion of interest, research on both the design and potential uses of mindfulness-based mobile applications (MBMAs) is scarce.\n\n\nOBJECTIVE\nOur main objective was to study the features and functionalities of current MBMAs and compare them to current evidence-based literature in the health and clinical setting.\n\n\nMETHODS\nWe searched online vendor markets, scientific journal databases, and grey literature related to MBMAs. We included mobile applications that featured a mindfulness-based component related to training or daily practice of mindfulness techniques. We excluded opinion-based articles from the literature.\n\n\nRESULTS\nThe literature search resulted in 11 eligible matches, two of which completely met our selection criteria-a pilot study designed to evaluate the feasibility of a MBMA to train the practice of \"walking meditation,\" and an exploratory study of an application consisting of mood reporting scales and mindfulness-based mobile therapies. The online market search eventually analyzed 50 available MBMAs. Of these, 8% (4/50) did not work, thus we only gathered information about language, downloads, or prices. The most common operating system was Android. Of the analyzed apps, 30% (15/50) have both a free and paid version. MBMAs were devoted to daily meditation practice (27/46, 59%), mindfulness training (6/46, 13%), assessments or tests (5/46, 11%), attention focus (4/46, 9%), and mixed objectives (4/46, 9%). We found 108 different resources, of which the most used were reminders, alarms, or bells (21/108, 19.4%), statistics tools (17/108, 15.7%), audio tracks (15/108, 13.9%), and educational texts (11/108, 10.2%). Daily, weekly, monthly statistics, or reports were provided by 37% (17/46) of the apps. 28% (13/46) of them permitted access to a social network. No information about sensors was available. The analyzed applications seemed not to use any external sensor. English was the only language of 78% (39/50) of the apps, and only 8% (4/50) provided information in Spanish. 20% (9/46) of the apps have interfaces that are difficult to use. No specific apps exist for professionals or, at least, for both profiles (users and professionals). We did not find any evaluations of health outcomes resulting from the use of MBMAs.\n\n\nCONCLUSIONS\nWhile a wide selection of MBMAs seem to be available to interested people, this study still shows an almost complete lack of evidence supporting the usefulness of those applications. We found no randomized clinical trials evaluating the impact of these applications on mindfulness training or health indicators, and the potential for mobile mindfulness applications remains largely unexplored.",
"title": ""
}
] | [
{
"docid": "c9dea3f2c6a1c3adec1e77d76cd5a329",
"text": "With the widespread applications of deep convolutional neural networks (DCNNs), it becomes increasingly important for DCNNs not only to make accurate predictions but also to explain how they make their decisions. In this work, we propose a CHannel-wise disentangled InterPretation (CHIP) model to give the visual interpretation to the predictions of DCNNs. The proposed model distills the class-discriminative importance of channels in networks by utilizing the sparse regularization. Here, we first introduce the network perturbation technique to learn the model. The proposed model is capable to not only distill the global perspective knowledge from networks but also present the class-discriminative visual interpretation for specific predictions of networks. It is noteworthy that the proposed model is able to interpret different layers of networks without re-training. By combining the distilled interpretation knowledge in different layers, we further propose the Refined CHIP visual interpretation that is both high-resolution and class-discriminative. Experimental results on the standard dataset demonstrate that the proposed model provides promising visual interpretation for the predictions of networks in image classification task compared with existing visual interpretation methods. Besides, the proposed method outperforms related approaches in the application of ILSVRC 2015 weakly-supervised localization task.",
"title": ""
},
{
"docid": "b64602c81c036d30c9a6eec261d2e09f",
"text": "In this paper, we discuss the need for an effective representation of video data to aid analysis of large datasets of video clips and describe a prototype developed to explore the use of spatio-temporal interest points for action recognition. Our focus is on ways that computation can assist analysis.",
"title": ""
},
{
"docid": "1fe8a60595463038046be38b747565e3",
"text": "Recent WiFi standards use Channel State Information (CSI) feedback for better MIMO and rate adaptation. CSI provides detailed information about current channel conditions for different subcarriers and spatial streams. In this paper, we show that CSI feedback from a client to the AP can be used to recognize different fine-grained motions of the client. We find that CSI can not only identify if the client is in motion or not, but also classify different types of motions. To this end, we propose APsense, a framework that uses CSI to estimate the sensor patterns of the client. It is observed that client's sensor (e.g. accelerometer) values are correlated to CSI values available at the AP. We show that using simple machine learning classifiers, APsense can classify different motions with accuracy as high as 90%.",
"title": ""
},
{
"docid": "fa38b2d63562699af5200b5efa476f64",
"text": "Hashtags, originally introduced in Twitter, are now becoming the most used way to tag short messages in social networks since this facilitates subsequent search, classification and clustering over those messages. However, extracting information from hashtags is difficult because their composition is not constrained by any (linguistic) rule and they usually appear in short and poorly written messages which are difficult to analyze with classic IR techniques. In this paper we address two challenging problems regarding the “meaning of hashtags”— namely, hashtag relatedness and hashtag classification — and we provide two main contributions. First we build a novel graph upon hashtags and (Wikipedia) entities drawn from the tweets by means of topic annotators (such as TagME); this graph will allow us to model in an efficacious way not only classic co-occurrences but also semantic relatedness among hashtags and entities, or between entities themselves. Based on this graph, we design algorithms that significantly improve state-of-the-art results upon known publicly available datasets. The second contribution is the construction and the public release to the research community of two new datasets: the former is a new dataset for hashtag relatedness, the latter is a dataset for hashtag classification that is up to two orders of magnitude larger than the existing ones. These datasets will be used to show the robustness and efficacy of our approaches, showing improvements in F1 up to two-digits in percentage (absolute).",
"title": ""
},
{
"docid": "b1c1f9cdce2454508fc4a5c060dc1c57",
"text": "We present a reduced-order approach for robust, dynamic, and efficient bipedal locomotion control, culminating in 3D balancing and walking with ATRIAS, a heavily underactuated legged robot. These results are a development toward solving a number of enduring challenges in bipedal locomotion: achieving robust 3D gaits at various speeds and transitioning between them, all while minimally draining on-board energy supplies. Our reduced-order control methodology works by extracting and exploiting general dynamical behaviors from the spring-mass model of bipedal walking. When implemented on a robot with spring-mass passive dynamics, e.g. ATRIAS, this controller is sufficiently robust to balance while subjected to pushes, kicks, and successive dodgeball strikes. The controller further allowed smooth transitions between stepping in place and walking at a variety of speeds (up to 1.2 m/s). The resulting gait dynamics also match qualitatively to the reduced-order model, and additionally, measurements of human walking. We argue that the presented locomotion performance is compelling evidence of the effectiveness of the presented approach; both the control concepts and the practice of building robots with passive dynamics to accommodate them. INTRODUCTION We present 3D bipedal walking control for the dynamic bipedal robot, ATRIAS (Fig. 1), by building controllers on a foundation of insights from a reduced-order “spring-mass” math model. This work is aimed at tackling an enduring set of challenges in bipedal robotics: fast 3D locomotion that is efficient and robust to disturbances. Further, we want the ability to transition between gaits of different speeds, including slowing to and starting from zero velocity. This set of demands is challenging from a generalized formal control approach because of various inconvenient mathematical properties; legged systems are typically cast as nonlinear, hybrid-dynamical, and nonholonomic systems, which at the same time, because of the very nature of walking, require highly robust control algorithms. Bipedal robots are also increasingly becoming underactuated, i.e. a system with fewer actuators than degrees of freedom [1]. Underactuation is problematic for nonlinear control methods; as degrees of underactuation increase, handy techniques like feedback-linearization decline in the scope of their utility. 1 Copyright c © 2015 by ASME FIGURE 1: ATRIAS, A HUMAN-SCALE BIPEDAL “SPRINGMASS” ROBOT DESIGNED TO WALK AND RUN IN THREE DIMENSIONS. Whenever a robot does not have an actuated foot planted on rigid ground, it is effectively underactuated. As a result, the faster legged robots move and the rougher the terrain they encounter, it becomes increasingly impractical to avoid these underactuated domains. Further, there are compelling reasons, both mechanical and dynamical, for removing actuators from certain degrees of freedom (see more in the robot design section). With these facts in mind, our robotic platform is built to embody an underactuated and compliant “spring-mass” model (Fig. 2A), and our control reckons with the severe underactuation that results. ATRIAS has twelve degrees of freedom when walking, but just six actuators. However, by numerically analyzing the spring-mass model, we identify important targets and structures of control that can be regulated on the full-order robot which approximates it. We organize the remainder of this paper as follows. We begin by surveying existing control methods for 3D, underactuated, and spring-mass locomotion. 
2) The design philosophy of our spring-mass robot, ATRIAS, and its implementation are briefly described. 3) We then build a controller incrementally from a 1D idealized model, to a 3D model, to the full 12-degree-of-freedom robot. 4) We show that this controller can regulate speeds ranging from 0 m/s to 1.2 m/s and transition between them. 5) With a set of perturbation experiments, we demonstrate the robustness of the controller and 6) argue in our conclusions for the thoughtful cooperation between the tasks of robot design and control. [FIGURE 2: THE DESIGN PHILOSOPHY OF ATRIAS, WHICH MAXIMALLY EMBODIES THE “SPRING-MASS” MODEL OF WALKING AND RUNNING. A) THE SPRING-MASS MODEL WITH A POINT MASS BODY AND MASSLESS LEG SPRING. B) ATRIAS WITH A VIRTUAL LEG SPRING OVERLAID.]",
"title": ""
},
{
"docid": "324dc3f410eb89f096dd72bffe9616bc",
"text": "The use of the Internet by older adults is growing at a substantial rate. They are becoming an increasingly important potential market for electronic commerce. However, previous researchers and practitioners have focused mainly on the youth market and paid less attention to issues related to the online behaviors of older consumers. To bridge the gap, the purpose of this study is to increase a better understanding of the drivers and barriers affecting older consumers’ intention to shop online. To this end, this study is developed by integrating the Unified Theory of Acceptance and Use of Technology (UTAUT) and innovation resistance theory. By comparing younger consumers with their older counterparts, in terms of gender the findings indicate that the major factors driving older adults toward online shopping are performance expectation and social influence which is the same with younger. On the other hand, the major barriers include value, risk, and tradition which is different from younger. Consequently, it is notable that older adults show no gender differences in regards to the drivers and barriers. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "15ec9bfa4c3a989fb67dce4f1fb172c5",
"text": "This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks on software and developing efficient accelerators for execution on FPGA. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. Binarization reduces the memory footprint and replaces the power-hungry matrix-multiplication with light-weight XnorPopcount operations. However, binary networks suffer from a degraded accuracy compared to their fixed-point counterparts. We show that the state-of-the-art methods for optimizing binary networks accuracy, significantly increase the implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our proposition improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator. Instead, it provides a tradeoff between throughput and accuracy while the area overhead of multi-level binarization is negligible.",
"title": ""
},
{
"docid": "e1a2dc853f96f5b01fe89e5462bdcb52",
"text": "Natural language generation from visual inputs has attracted extensive research attention recently. Generating poetry from visual content is an interesting but very challenging task. We propose and address the new multimedia task of generating classical Chinese poetry from image streams. In this paper, we propose an Images2Poem model with a selection mechanism and an adaptive self-attention mechanism for the problem. The model first selects representative images to summarize the image stream. During decoding, it adaptively pays attention to the information from either source-side image stream or target-side previously generated characters. It jointly summarizes the images and generates relevant, high-quality poetry from image streams. Experimental results demonstrate the effectiveness of the proposed approach. Our model outperforms baselines in different human evaluation metrics.",
"title": ""
},
{
"docid": "0adb426bb2144baa149a3c1e97db55ee",
"text": "Chatbots have drawn significant attention of late in both industry and academia. For most task completion bots in the industry, human intervention is the only means of avoiding mistakes in complex real-world cases. However, to the best of our knowledge, there is no existing research work modeling the collaboration between task completion bots and human workers. In this paper, we introduce CoChat, a dialog management framework to enable effective collaboration between bots and human workers. In CoChat, human workers can introduce new actions at any time to handle previously unseen cases. We propose a memory-enhanced hierarchical RNN (MemHRNN) to handle the one-shot learning challenges caused by instantly introducing new actions in CoChat. Extensive experiments on real-world datasets well demonstrate that CoChat can relieve most of the human workers’ workload, and get better user satisfaction rates comparing to other state-of-the-art frameworks.",
"title": ""
},
{
"docid": "210e26d5d11582be68337a0cc387ab8e",
"text": "This paper presents the results of experiments carried out with the goal of applying the machine learning techniques of reinforcement learning and neural networks with reinforcement learning to the game of Tetris. Tetris is a well-known computer game that can be played either by a single player or competitively with slight variations, toward the end of accumulating a high score or defeating the opponent. The fundamental hypothesis of this paper is that if the points earned in Tetris are used as the reward function for a machine learning agent, then that agent should be able to learn to play Tetris without other supervision. Toward this end, a state-space that summarizes the essential feature of the Tetris board is designed, high-level actions are developed to interact with the game, and agents are trained using Q-Learning and neural networks. As a result of these efforts, agents learn to play Tetris and to compete with other players. While the learning agents fail to accumulate as many points as the most advanced AI agents, they do learn to play more efficiently.",
"title": ""
},
{
"docid": "d9366c0456eedecd396a9aa1dbc31e35",
"text": "A connectionist model is presented, the TraceLink model, that implements an autonomous \"off-line\" consolidation process. The model consists of three subsystems: (1) a trace system (neocortex), (2) a link system (hippocampus and adjacent regions), and (3) a modulatory system (basal forebrain and other areas). The model is able to account for many of the characteristics of anterograde and retrograde amnesia, including Ribot gradients, transient global amnesia, patterns of shrinkage of retrograde amnesia, and correlations between anterograde and retrograde amnesia or the absence thereof (e.g., in isolated retrograde amnesia). In addition, it produces normal forgetting curves and can exhibit permastore. It also offers an explanation for the advantages of learning under high arousal for long-term retention.",
"title": ""
},
{
"docid": "45de40eb5661ff0f44392e255c45f646",
"text": "Cloud computing is a new computing paradigm that is gaining increased popularity. More and more sensitive user data are stored in the cloud. The privacy of users’ access pattern to the data should be protected to prevent un-trusted cloud servers from inferring users’ private information or launching stealthy attacks. Meanwhile, the privacy protection schemes should be efficient as cloud users often use thin client devices to access the cloud. In this paper, we propose a lightweight scheme to protect the privacy of data access pattern. Comparing with existing state-of-the-art solutions, our scheme incurs less communication and computational overhead, requires significantly less storage space at the cloud user, while consuming similar storage space at the cloud server. Rigorous proofs and extensive evaluations have been conducted to demonstrate that the proposed scheme can hide the data access pattern effectively in the long run after a reasonable number of accesses have been made.",
"title": ""
},
{
"docid": "01d85de1c78a6f7eb5b65dacac29baf8",
"text": "A chatbot is a conventional agent that is able to interact with users in a given subject by using natural language. The conversations in most chatbot are still using a keyboard as the input. Keyboard input is considered ineffective as the conversation is not natural without any saying and a conversation is not just about words. Therefore, this paper propose a design of a chatbot with avatar and voice interaction to make a conversation more alive. This proposed approach method will come from using several API and using its output as another input to next API. It would take speech recognition to take input from user, then proceed it to chatbot API to receive the chatbot reply in a text form. The reply will be processed to text-to-speech recognition and created a spoken, audio version of the reply. Last, the computer will render an avatar whose gesture and lips are sync with the audio reply. This design would make every customer service or any service with human interaction can use it to make interaction more natural. This design can be further explored with additional tool such as web camera to make the agent can analyze the user's emotion and reaction.",
"title": ""
},
{
"docid": "ea5b41179508151987a1f6e6d154d7a6",
"text": "Despite the considerable quantity of research directed towards multitouch technologies, a set of standardized UI components have not been developed. Menu systems provide a particular challenge, as traditional GUI menus require a level of pointing precision inappropriate for direct finger input. Marking menus are a promising alternative, but have yet to be investigated or adapted for use within multitouch systems. In this paper, we first investigate the human capabilities for performing directional chording gestures, to assess the feasibility of multitouch marking menus. Based on the positive results collected from this study, and in particular, high angular accuracy, we discuss our new multitouch marking menu design, which can increase the number of items in a menu, and eliminate a level of depth. A second experiment showed that multitouch marking menus perform significantly faster than traditional hierarchal marking menus, reducing acquisition times in both novice and expert usage modalities.",
"title": ""
},
{
"docid": "87993df44973bd83724baace13ea1aa7",
"text": "OBJECTIVE\nThe objective of this research was to determine the relative impairment associated with conversing on a cellular telephone while driving.\n\n\nBACKGROUND\nEpidemiological evidence suggests that the relative risk of being in a traffic accident while using a cell phone is similar to the hazard associated with driving with a blood alcohol level at the legal limit. The purpose of this research was to provide a direct comparison of the driving performance of a cell phone driver and a drunk driver in a controlled laboratory setting.\n\n\nMETHOD\nWe used a high-fidelity driving simulator to compare the performance of cell phone drivers with drivers who were intoxicated from ethanol (i.e., blood alcohol concentration at 0.08% weight/volume).\n\n\nRESULTS\nWhen drivers were conversing on either a handheld or hands-free cell phone, their braking reactions were delayed and they were involved in more traffic accidents than when they were not conversing on a cell phone. By contrast, when drivers were intoxicated from ethanol they exhibited a more aggressive driving style, following closer to the vehicle immediately in front of them and applying more force while braking.\n\n\nCONCLUSION\nWhen driving conditions and time on task were controlled for, the impairments associated with using a cell phone while driving can be as profound as those associated with driving while drunk.\n\n\nAPPLICATION\nThis research may help to provide guidance for regulation addressing driver distraction caused by cell phone conversations.",
"title": ""
},
{
"docid": "f1c4577a013e313d3a0bfdd1f5c9981e",
"text": "In this work, a simple and compact transition from substrate integrated waveguide (SIW) to traditional rectangular waveguide is proposed and demonstrated. The substrate of SIW can be easily surface-mounted to the standard flange of the waveguide by creating a flange on the substrate. A longitudinal slot window etched on the broad wall of SIW couples energy between SIW and rectangular waveguide. An example of the transition structure is realized at 35 GHz with substrate of RT/Duroid 5880. HFSS simulated result of the transition shows a return loss less than −15 dB over a frequency range of 800 MHz. A back to back connected transition has been fabricated, and the measured results confirm well with the anticipated ones.",
"title": ""
},
{
"docid": "bab7a21f903157fcd0d3e70da4e7261a",
"text": "The clinical, electrophysiological and morphological findings (light and electron microscopy of the sural nerve and gastrocnemius muscle) are reported in an unusual case of Guillain-Barré polyneuropathy with an association of muscle hypertrophy and a syndrome of continuous motor unit activity. Fasciculation, muscle stiffness, cramps, myokymia, impaired muscle relaxation and percussion myotonia, with their electromyographic accompaniments, were abolished by peripheral nerve blocking, carbamazepine, valproic acid or prednisone therapy. Muscle hypertrophy, which was confirmed by morphometric data, diminished 2 months after the beginning of prednisone therapy. Electrophysiological and nerve biopsy findings revealed a mixed process of axonal degeneration and segmental demyelination. Muscle biopsy specimen showed a marked predominance and hypertrophy of type-I fibres and atrophy, especially of type-II fibres.",
"title": ""
},
{
"docid": "4df52d891c63975a1b9d4cd6c74571db",
"text": "DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, as critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy low traffic from large number of bots that mimic benign traffic behavior. Sophisticated stealthy attacks on critical links can cause a devastating effect such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion to invalidate an adversary's knowledge and plan of attack against critical network resources. Our present approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement and offers constant VN migration to new resources. Our approach has two components: (1) a correct-by-construction VN migration planning that significantly increases the uncertainty about critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining the network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework on both PlanetLab and Mininet-based experimentations.",
"title": ""
},
{
"docid": "6f13503bf65ff58b7f0d4f3282f60dec",
"text": "Body centric wireless communication is now accepted as an important part of 4th generation (and beyond) mobile communications systems, taking the form of human to human networking incorporating wearable sensors and communications. There are also a number of body centric communication systems for specialized occupations, such as paramedics and fire-fighters, military personnel and medical sensing and support. To support these developments there is considerable ongoing research into antennas and propagation for body centric communications systems, and this paper will summarise some of it, including the characterisation of the channel on the body, the optimisation of antennas for these channels, and communications to medical implants where advanced antenna design and characterisation and modelling of the internal body channel are important research needs. In all of these areas both measurement and simulation pose very different and challenging issues to be faced by the researcher.",
"title": ""
},
{
"docid": "c68196f826f2afb61c13a0399d921421",
"text": "BACKGROUND\nIndividuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level.\n\n\nMETHODS\nUsing baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework.\n\n\nRESULTS\nCognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions.\n\n\nCONCLUSIONS\nWe developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment.",
"title": ""
}
] | scidocsrr |
093282edea65cc5ce4e7a88347b5eab5 | Partial Fingerprint Matching through Region-Based Similarity | [
{
"docid": "3160df3c3e64635f36a50c8d7fd27f8c",
"text": "In this paper, we introduce the Minutia Cylinder-Code (MCC): a novel representation based on 3D data structures (called cylinders), built from minutiae distances and angles. The cylinders can be created starting from a subset of the mandatory features (minutiae position and direction) defined by standards like ISO/IEC 19794-2 (2005). Thanks to the cylinder invariance, fixed-length, and bit-oriented coding, some simple but very effective metrics can be defined to compute local similarities and to consolidate them into a global score. Extensive experiments over FVC2006 databases prove the superiority of MCC with respect to three well-known techniques and demonstrate the feasibility of obtaining a very effective (and interoperable) fingerprint recognition implementation for light architectures.",
"title": ""
}
] | [
{
"docid": "137287318bc2a50feeb026add3f58a43",
"text": "BACKGROUND\nThe use of bioactive proteins, such as rhBMP-2, may improve bone regeneration in oral and maxillofacial surgery.\n\n\nPURPOSE\nAnalyze the effect of using bioactive proteins for bone regeneration in implant-based rehabilitation.\n\n\nMATERIALS AND METHODS\nSeven databases were screened. Only clinical trials that evaluated the use of heterologous sources of bioactive proteins for bone formation prior to implant-based rehabilitation were included. Statistical analyses were carried out using a random-effects model by comparing the standardized mean difference between groups for bone formation, and risk ratio for implant survival (P ≤ .05).\n\n\nRESULTS\nSeventeen studies were included in the qualitative analysis, and 16 in the meta-analysis. For sinus floor augmentation, bone grafts showed higher amounts of residual bone graft particles than bioactive treatments (P ≤ .05). While for alveolar ridge augmentation bioactive treatments showed a higher level of bone formation than control groups (P ≤ .05). At 3 years of follow-up, no statistically significant differences were observed for implant survival (P > .05).\n\n\nCONCLUSIONS\nBioactive proteins may improve bone formation in alveolar ridge augmentation, and reduce residual bone grafts in sinus floor augmentation. Further studies are needed to evaluate the long-term effect of using bioactive treatments for implant-based rehabilitation.",
"title": ""
},
{
"docid": "ff7b8957aeedc0805f972bf5bd6923f0",
"text": "This study was designed to test the Fundamental Difference Hypothesis (Bley-Vroman, 1988), which states that, whereas children are known to learn language almost completely through (implicit) domain-specific mechanisms, adults have largely lost the ability to learn a language without reflecting on its structure and have to use alternative mechanisms, drawing especially on their problem-solving capacities, to learn a second language. The hypothesis implies that only adults with a high level of verbal analytical ability will reach near-native competence in their second language, but that this ability will not be a significant predictor of success for childhood second language acquisition. A study with 57 adult Hungarian-speaking immigrants confirmed the hypothesis in the sense that very few adult immigrants scored within the range of child arrivals on a grammaticality judgment test, and that the few who did had high levels of verbal analytical ability; this ability was not a significant predictor for childhood arrivals. This study replicates the findings of Johnson and Newport (1989) and provides an explanation for the apparent exceptions in their study. These findings lead to a reconceptualization of the Critical Period Hypothesis: If the scope of this hypothesis is lim-",
"title": ""
},
{
"docid": "cb7dda8f4059e5a66e4a6e26fcda601e",
"text": "Purpose – This UK-based research aims to build on the US-based work of Keller and Aaker, which found a significant association between “company credibility” (via a brand’s “expertise” and “trustworthiness”) and brand extension acceptance, hypothesising that brand trust, measured via two correlate dimensions, is significantly related to brand extension acceptance. Design/methodology/approach – Discusses brand extension and various prior, validated influences on its success. Focuses on the construct of trust and develops hypotheses about the relationship of brand trust with brand extension acceptance. The hypotheses are then tested on data collected from consumers in the UK. Findings – This paper, using 368 consumer responses to nine, real, low involvement UK product and service brands, finds support for a significant association between the variables, comparable in strength with that between media weight and brand share, and greater than that delivered by the perceived quality level of the parent brand. Originality/value – The research findings, which develop a sparse literature in this linkage area, are of significance to marketing practitioners, since brand trust, already associated with brand equity and brand loyalty, and now with brand extension, needs to be managed and monitored with care. The paper prompts further investigation of the relationship between brand trust and brand extension acceptance in other geographic markets and with other higher involvement categories.",
"title": ""
},
{
"docid": "3f9a46f472ab276c39fb96b78df132ee",
"text": "In this paper, we present a novel technique that enables capturing of detailed 3D models from flash photographs integrating shading and silhouette cues. Our main contribution is an optimization framework which not only captures subtle surface details but also handles changes in topology. To incorporate normals estimated from shading, we employ a mesh-based deformable model using deformation gradient. This method is capable of manipulating precise geometry and, in fact, it outperforms previous methods in terms of both accuracy and efficiency. To adapt the topology of the mesh, we convert the mesh into an implicit surface representation and then back to a mesh representation. This simple procedure removes self-intersecting regions of the mesh and solves the topology problem effectively. In addition to the algorithm, we introduce a hand-held setup to achieve multi-view photometric stereo. The key idea is to acquire flash photographs from a wide range of positions in order to obtain a sufficient lighting variation even with a standard flash unit attached to the camera. Experimental results showed that our method can capture detailed shapes of various objects and cope with topology changes well.",
"title": ""
},
{
"docid": "b7521521277f944a9532dc4435a2bda7",
"text": "The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications in how we design, develop, deploy and use networks and applications. The NDN design and development has attracted significant attention from the networking community. To facilitate broader participation in addressing NDN research and development challenges, this tutorial will describe the vision of this new architecture and its basic components and operations.",
"title": ""
},
{
"docid": "b47d53485704f4237e57d220640346a7",
"text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\" (objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 ms) until the mass energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. When the degree of coherent mass energy difference leads to sufficient separation of space time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum --, classical reduction occurs. Unlike the random, \"subjective reduction\" (SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a se(f-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for post-reduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\" which tune and \"orchestrate\" the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\" (\"Orch OR\"), and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500ms) will elicit Orch OR. In providing a connection among (1) pre-conscious to conscious transition, (2) fundamental space time notions, (3) non-computability, and (4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\"), we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed. * Corresponding author. Tel.: (520) 626-2116. Fax: (520) 626-2689. E-Mail: srh(cv ccit.arizona.edu. 0378-4754/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved SSDI0378-4754(95 ) 0049-6 454 S. Hameroff, R. Penrose/Mathematics and Computers in Simulation 40 (1996) 453 480",
"title": ""
},
{
"docid": "03dc23b2556e21af9424500e267612bb",
"text": "File fragment classification is an important and difficult problem in digital forensics. Previous works in this area mainly relied on specific byte sequences in file headers and footers, or statistical analysis and machine learning algorithms on data from the middle of the file. This paper introduces a new approach to classify file fragment based on grayscale image. The proposed method treats a file fragment as a grayscale image, and uses image classification method to classify file fragment. Furthermore, two models based on file-unbiased and type-unbiased are proposed to verify the validity of the proposed method. Compared with previous works, the experimental results are promising. An average classification accuracy of 39.7% in file-unbiased model and 54.7% in type-unbiased model are achieved on 29 file types.",
"title": ""
},
{
"docid": "ba966c2fc67b88d26a3030763d56ed1a",
"text": "Design of a long read-range, reconfigurable operating frequency radio frequency identification (RFID) metal tag is proposed in this paper. The antenna structure consists of two nonconnected load bars and two bowtie patches electrically connected through four pairs of vias to a conducting backplane to form a looped-bowtie RFID tag antenna that is suitable for mounting on metallic objects. The design offers more degrees of freedom to tune the input impedance of the proposed antenna. The load bars, which have a cutoff point on each bar, can be used to reconfigure the operating frequency of the tag by exciting any one of the three possible frequency modes; hence, this tag can be used worldwide for the UHF RFID frequency band. Experimental tests show that the maximum read range of the prototype, placed on a metallic object, are found to be 3.0, 3.2, and 3.3 m, respectively, for the three operating modes, which has been tested for an RFID reader with only 0.4 W error interrupt pending register (EIPR). The paper shows that the simulated and measured results are in good agreement with each other.",
"title": ""
},
{
"docid": "dae40fa32526bf965bad70f98eb51bb7",
"text": "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform the prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2× weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21× weight reduction without accuracy loss. When we focus on the convolutional layer pruning for computation reductions, we can reduce the total computation by five times compared with the prior work (achieving a total of 13.4× weight reduction in convolutional layers). Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning.",
"title": ""
},
{
"docid": "bffcc580fa868d4c0b05742997caa55a",
"text": "In this paper, we propose a probabilistic model for detecting relevant changes in registered aerial image pairs taken with the time differences of several years and in different seasonal conditions. The introduced approach, called the conditional mixed Markov model, is a combination of a mixed Markov model and a conditionally independent random field of signals. The model integrates global intensity statistics with local correlation and contrast features. A global energy optimization process ensures simultaneously optimal local feature selection and smooth observation-consistent segmentation. Validation is given on real aerial image sets provided by the Hungarian Institute of Geodesy, Cartography and Remote Sensing and Google Earth.",
"title": ""
},
{
"docid": "bbf987eef74d76cf2916ae3080a2b174",
"text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.",
"title": ""
},
{
"docid": "b87920c111fa8e4233a537aee8f0c027",
"text": "Mobile robots are increasingly being developed for highrisk missions in rough terrain situations, such as planetary exploration. Here a rough-terrain control (RTC) methodology is presented that exploits the actuator redundancy found in multi-wheeled mobile robot systems to improve ground traction and reduce power consumption. The methodology “chooses” an optimization criterion based on the local terrain profile. A key element of the method is to be able to estimate the wheelground contact angles. A method using an extended Kalman filter is presented for estimating these angles using simple onboard sensors. Simulation results for a wheeled micro-rover traversing Mars-like terrain demonstrate the effectiveness of the algorithms. INTRODUCTION Mobile robots are increasingly being developed for highrisk missions in rough terrain environments. One successful example is the NASA/JPL Sojourner Martian rover (Golombek, 1998). Future planetary missions will require mobile robots to perform difficult tasks in more challenging terrain than encountered by Sojourner (Hayati et al., 1996; Schenker, et al. 1997). Other examples of rough terrain applications for robotic systems can be found in the forestry and mining industries, and in hazardous material handling applications, such as the Chernobyl disaster site clean-up (Cunningham et. al., 1998; Gonthier and Papadopolous,1998; Osborn, 1989). In rough terrain, it is critical for mobile robots to maintain good wheel traction. Wheel slip could cause the rover to lose control and become trapped. Substantial work has been done on traction control of passenger vehicles on flat roads (Kawabe et al. , 1997). This work is not applicable to low-speed, rough terrain rovers because in these vehicles wheel slip is caused primarily by kinematic incompatibility or loose soil conditions, rather than “breakaway” wheel acceleration. Traction control for low-speed mobile robots on flat terrain has been studied (Reister and Unseren, 1993). Later work has considered the important effects of terrain unevenness on traction control (Sreenivasan and Wilcox, 1996). This work assumes knowledge of terrain geometry and soil characteristics. However, in such applications as planetary exploration this information is usually unknown. A fuzzy-logic traction control algorithm for a rocker-bogie rover that did not assume knowledge of terrain geometry has been developed (Hacot, 1998). This approach is based on heuristic rules related to vehicle mechanics. Knowledge of terrain information is critical to the traction control problem. An key variable for traction algorithms is the contact angles between the vehicle wheels and the ground (Sreenivasan and Wilcox, 1994; Farritor et al., 1998). Measuring this angle physically is difficult. Researchers have proposed installing multi-axis force sensors at each wheel to measure the contact force direction, and inferring the groundcontact angle from the force data (Sreenivasan and Wilcox, 1994). However, wheel-hub mounted multi-axis force sensors would be costly and complex. Complexity reduces reliability and adds weight, two factors that carry severe penalties for planetary exploration applications. This paper presents a control methodology for vehicles with redundant drive wheels for improved traction or reduced power consumption. In highly uneven terrain, traction is optimized. In relatively flat terrain, power consumption is minimized. 
A method is presented for estimating wheel-ground contact angles of mobile robots using simple on-board sensors. The algorithm is based on rigid-body kinematic equations and uses sensors such as vehicle inclinometers and wheel tachometers. It does not require the use of force sensors. The method uses an extended Kalman filter to fuse noisy sensor signals. Simulation results are presented for a planar two-wheeled rover on uneven Mars-like soil. It is shown that the wheel-ground contact angle estimation method can accurately estimate contact angles in the presence of sensor noise and wheel slip. It is also shown that the rough-terrain control (RTC) method leads to increased traction and improved power consumption as compared to traditional individual-wheel velocity control.",
"title": ""
},
{
"docid": "720eccb945faa357bc44c5aa33fe60a9",
"text": "The evolution of an arm exoskeleton design for treating shoulder pathology is examined. Tradeoffs between various kinematics configurations are explored, and a device with five active degrees of freedom is proposed. Two rapid-prototype designs were built and fitted to several subjects to verify the kinematic design and determine passive link adjustments. Control modes are developed for exercise therapy and functional rehabilitation, and a distributed software architecture that incorporates computer safety monitoring is described. Although intended primarily for therapy, the exoskeleton is also used to monitor progress in strength, range of motion, and functional task performance",
"title": ""
},
{
"docid": "0da299fb53db5980a10e0ae8699d2209",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "04956fbf44b2a1d7164325fc395c019a",
"text": "The ever-growing number of people using Twitter makes it a valuable source of timely information. However, detecting events in Twitter is a difficult task, because tweets that report interesting events are overwhelmed by a large volume of tweets on unrelated topics. Existing methods focus on the textual content of tweets and ignore the social aspect of Twitter. In this paper, we propose mention-anomaly-based event detection (MABED), a novel statistical method that relies solely on tweets and leverages the creation frequency of dynamic links (i.e., mentions) that users insert in tweets to detect significant events and estimate the magnitude of their impact over the crowd. MABED also differs from the literature in that it dynamically estimates the period of time during which each event is discussed, rather than assuming a predefined fixed duration for all events. The experiments we conducted on both English and French Twitter data show that the mention-anomaly-based approach leads to more accurate event detection and improved robustness in presence of noisy Twitter content. Qualitatively speaking, we find that MABED helps with the interpretation of detected events by providing clear textual descriptions and precise temporal descriptions. We also show how MABED can help understanding users’ interest. Furthermore, we describe three visualizations designed to favor an efficient exploration of the detected events.",
"title": ""
},
{
"docid": "b0d3388bc02f8ee55a8575de6253f5fb",
"text": "Today’s rapid changing and competitive environment requires educators to stay abreast of the job market in order to prepare their students for the jobs being demanded. This is more relevant about Information Technology (IT) jobs than others. However, to stay abreast of the market job demands require retrieving, sifting and analyzing large volume of data in order to understand the trends of the job market. Traditional methods of data collection and analysis are not sufficient for this kind of analysis due to the large volume of job data that is generated through the web and elsewhere. Luckily, the field of data mining has emerged to collect and sift through such large data volumes. However, even with data mining, appropriate data collection techniques and analysis need to be followed in order to correctly understand the trend. This paper illustrates our experience with employing mining techniques to understand the trend in IT Technology jobs. Data was collect using data mining techniques over a number of years from an online job agency. The data was then analyzed to reach a conclusion about the trends in the job market. Our experience in this regard along with literature review of the relevant topics is illustrated in this paper.",
"title": ""
},
{
"docid": "bf82fadedef61212cda85311a712560e",
"text": "The extensive growth of the Internet of Things (IoT) is providing direction towards the smart urban. The smart urban is favored because it improves the standard of living of the citizens and provides excellence in the community services. The services may include but not limited to health, parking, transport, water, environment, power, and so forth. The diverse and heterogeneous environment of IoT and smart urban is challenged by real-time data processing and decision-making. In this research article, we propose IoT based smart urban architecture using Big Data analytics. The proposed architecture is divided into three different tiers: (1) data acquisition and aggregation, (2) data computation and processing, and (3) decision making and application. The proposed architecture is implemented and validated on Hadoop Ecosystem using reliable and authentic datasets. The research shows that the proposed system presents valuable imminent into the community development systems to get better the existing smart urban architecture.",
"title": ""
},
{
"docid": "1f0926abdff68050ef88eea49adaf382",
"text": "Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major source of disagreement among the different theories is whether children are equipped with special mechanisms and biases for word learning, or their general cognitive abilities are adequate for the task. We present a novel computational model of early word learning to shed light on the mechanisms that might be at work in this process. The model learns word meanings as probabilistic associations between words and semantic elements, using an incremental and probabilistic learning mechanism, and drawing only on general cognitive abilities. The results presented here demonstrate that much about word meanings can be learned from naturally occurring child-directed utterances (paired with meaning representations), without using any special biases or constraints, and without any explicit developmental changes in the underlying learning mechanism. Furthermore, our model provides explanations for the occasionally contradictory child experimental data, and offers predictions for the behavior of young word learners in novel situations.",
"title": ""
},
{
"docid": "9e8c61584bbbda83c73a4cb2f74f8d37",
"text": "Internet addiction (IA) has become a widespread and problematic phenomenon. Little is known about the effect of internet addiction (IA). The present study focus on the Meta analysis of internet addiction and its relation to mental health among youth. Effect size estimated the difference between the gender with respect to the severity of internet addiction and the depression, anxiety, social isolation and sleep pattern positive.",
"title": ""
}
] | scidocsrr |
d0e679aae451c58682d22f36c93afdc1 | CMU OAQA at TREC 2016 LiveQA: An Attentional Neural Encoder-Decoder Approach for Answer Ranking | [
{
"docid": "d29634888a4f1cee1ed613b0f038ddb3",
"text": "This work investigates the use of linguistically motivated features to improve search, in particular for ranking answers to non-factoid questions. We show that it is possible to exploit existing large collections of question–answer pairs (from online social Question Answering sites) to extract such features and train ranking models which combine them effectively. We investigate a wide range of feature types, some exploiting natural language processing such as coarse word sense disambiguation, named-entity identification, syntactic parsing, and semantic role labeling. Our experiments demonstrate that linguistic features, in combination, yield considerable improvements in accuracy. Depending on the system settings we measure relative improvements of 14% to 21% in Mean Reciprocal Rank and Precision@1, providing one of the most compelling evidence to date that complex linguistic features such as word senses and semantic roles can have a significant impact on large-scale information retrieval tasks.",
"title": ""
}
] | [
{
"docid": "c020a3ba9a2615cb5ed9a7e9d5aa3ce0",
"text": "Neural network approaches to Named-Entity Recognition reduce the need for carefully handcrafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a lowdimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.",
"title": ""
},
{
"docid": "93ec0a392a7a29312778c6834ffada73",
"text": "BACKGROUND\nThe new world of safe aesthetic injectables has become increasingly popular with patients. Not only is there less risk than with surgery, but there is also significantly less downtime to interfere with patients' normal work and social schedules. Botulinum toxin (BoNT) type A (BoNTA) is an indispensable tool used in aesthetic medicine, and its broad appeal has made it a hallmark of modern culture. The key to using BoNTA to its best effect is to understand patient-specific factors that will determine the treatment plan and the physician's ability to personalize injection strategies.\n\n\nOBJECTIVES\nTo present international expert viewpoints and consensus on some of the contemporary best practices in aesthetic BoNTA, so that beginner and advanced injectors may find pearls that provide practical benefits.\n\n\nMETHODS AND MATERIALS\nExpert aesthetic physicians convened to discuss their approaches to treatment with BoNT. The discussions and consensus from this meeting were used to provide an up-to-date review of treatment strategies to improve patient results. Information is presented on patient management and assessment, documentation and consent, aesthetic scales, injection strategies, dilution, dosing, and adverse events.\n\n\nCONCLUSION\nA range of product- and patient-specific factors influence the treatment plan. Truly optimized outcomes are possible only when the treating physician has the requisite knowledge, experience, and vision to use BoNTA as part of a unique solution for each patient's specific needs.",
"title": ""
},
{
"docid": "bdfa9a484a2bca304c0a8bbd6dcd7f1a",
"text": "We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.",
"title": ""
},
{
"docid": "ac6e2ecb17757c8d4048c4ac09add80f",
"text": "Purpose – To examine issues of standardization and adaptation in global marketing strategy and to explain the dynamics of standardization. Design/methodology/approach – This is a conceptual research paper that has been developed based on gaps in prior frameworks of standardization/adaptation. A three-factor model of standardization/adaptation of global marketing strategy was developed. The three factors include homogeneity of customer response to the marketing mix, transferability of competitive advantage, and similarities in the degree of economic freedom. Findings – The model through the use of feedback effects explains the dynamics of standardization. Research limitations/implications – Future research needs to empirically test the model. To enable empirical validation, reliable and valid measures of the three factors proposed in the model need to be developed. Additionally, the model may be used in future research to delineate the impact a variable may have on the ability of a firm to follow a standardized global marketing strategy. Practical implications – The three-factor model aids decisions relating to standardization in a global marketing context. Originality/value – The paper furthers the discussion on the issue of standardization. Through the identification of three factors that impact standardization/adaptation decisions, and the consideration of feedback effects, the paper provides a foundation for future research addressing the issue.",
"title": ""
},
{
"docid": "59a25ae61a22baa8e20ae1a5d88c4499",
"text": "This paper tackles a major privacy threat in current location-based services where users have to report their exact locations to the database server in order to obtain their desired services. For example, a mobile user asking about her nearest restaurant has to report her exact location. With untrusted service providers, reporting private location information may lead to several privacy threats. In this paper, we present a peer-to-peer (P2P)spatial cloaking algorithm in which mobile and stationary users can entertain location-based services without revealing their exact location information. The main idea is that before requesting any location-based service, the mobile user will form a group from her peers via single-hop communication and/or multi-hop routing. Then,the spatial cloaked area is computed as the region that covers the entire group of peers. Two modes of operations are supported within the proposed P2P s patial cloaking algorithm, namely, the on-demand mode and the proactive mode. Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but the on-demand incurs longer response time.",
"title": ""
},
{
"docid": "2bd51149e9899b588ca08688c4ff1db2",
"text": "Buildings are among the largest consumers of electricity in the US. A significant portion of this energy use in buildings can be attributed to HVAC systems used to maintain comfort for occupants. In most cases these building HVAC systems run on fixed schedules and do not employ any fine grained control based on detailed occupancy information. In this paper we present the design and implementation of a presence sensor platform that can be used for accurate occupancy detection at the level of individual offices. Our presence sensor is low-cost, wireless, and incrementally deployable within existing buildings. Using a pilot deployment of our system across ten offices over a two week period we identify significant opportunities for energy savings due to periods of vacancy. Our energy measurements show that our presence node has an estimated battery lifetime of over five years, while detecting occupancy accurately. Furthermore, using a building simulation framework and the occupancy information from our testbed, we show potential energy savings from 10% to 15% using our system.",
"title": ""
},
{
"docid": "8cd8577a70729d03c1561df6a1fcbdbb",
"text": "Quantum computing is a new computational paradigm created by reformulating information and computation in a quantum mechanical framework [30, 27]. Since the laws of physics appear to be quantum mechanical, this is the most relevant framework to consider when considering the fundamental limitations of information processing. Furthermore, in recent decades we have seen a major shift from just observing quantum phenomena to actually controlling quantum mechanical systems. We have seen the communication of quantum information over long distances, the “teleportation” of quantum information, and the encoding and manipulation of quantum information in many different physical media. We still appear to be a long way from the implementation of a large-scale quantum computer, however it is a serious goal of many of the world’s leading physicists, and progress continues at a fast pace. In parallel with the broad and aggressive program to control quantum mechanical systems with increased precision, and to control and interact a larger number of subsystems, researchers have also been aggressively pushing the boundaries of what useful tasks one could perform with quantum mechanical devices. These in-",
"title": ""
},
{
"docid": "5c598998ffcf3d6008e8e5eed94fc396",
"text": "Music information retrieval (MIR) is an emerging research area that receives growing attention from both the research community and music industry. It addresses the problem of querying and retrieving certain types of music from large music data set. Classification is a fundamental problem in MIR. Many tasks in MIR can be naturally cast in a classification setting, such as genre classification, mood classification, artist recognition, instrument recognition, etc. Music annotation, a new research area in MIR that has attracted much attention in recent years, is also a classification problem in the general sense. Due to the importance of music classification in MIR research, rapid development of new methods, and lack of review papers on recent progress of the field, we provide a comprehensive review on audio-based classification in this paper and systematically summarize the state-of-the-art techniques for music classification. Specifically, we have stressed the difference in the features and the types of classifiers used for different classification tasks. This survey emphasizes on recent development of the techniques and discusses several open issues for future research.",
"title": ""
},
{
"docid": "51c42a305039d65dc442910c8078a9aa",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a “world-model” network that learns to predict the dynamic consequences of the agent’s actions. Simultaneously, we train a separate explicit “self-model” that allows the agent to track the error map of its worldmodel. It then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in realistic physical environments.",
"title": ""
},
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "02c904c320db3a6e0fc9310f077f5d08",
"text": "Rejuvenative procedures of the face are increasing in numbers, and a plethora of different therapeutic options are available today. Every procedure should aim for the patient's safety first and then for natural and long-lasting results. The face is one of the most complex regions in the human body and research continuously reveals new insights into the complex interplay of the different participating structures. Bone, ligaments, muscles, fat, and skin are the key players in the layered arrangement of the face.Aging occurs in all involved facial structures but the onset and the speed of age-related changes differ between each specific structure, between each individual, and between different ethnic groups. Therefore, knowledge of age-related anatomy is crucial for a physician's work when trying to restore a youthful face.This review focuses on the current understanding of the anatomy of the human face and tries to elucidate the morphological changes during aging of bone, ligaments, muscles, and fat, and their role in rejuvenative procedures.",
"title": ""
},
{
"docid": "6470b7d1532012e938063d971f3ead29",
"text": "As society continues to accumulate more and more data, demand for machine learning algorithms that can learn from data with limited human intervention only increases. Semi-supervised learning (SSL) methods, which extend supervised learning algorithms by enabling them to use unlabeled data, play an important role in addressing this challenge. In this thesis, a framework unifying the traditional assumptions and approaches to SSL is defined. A synthesis of SSL literature then places a range of contemporary approaches into this common framework. Our focus is on methods which use generative adversarial networks (GANs) to perform SSL. We analyse in detail one particular GAN-based SSL approach. This is shown to be closely related to two preceding approaches. Through synthetic experiments we provide an intuitive understanding and motivate the formulation of our focus approach. We then theoretically analyse potential alternative formulations of its loss function. This analysis motivates a number of research questions that centre on possible improvements to, and experiments to better understand the focus model. While we find support for our hypotheses, our conclusion more broadly is that the focus method is not especially robust.",
"title": ""
},
{
"docid": "713c7761ecba317bdcac451fcc60e13d",
"text": "We describe a method for automatically transcribing guitar tablatures from audio signals in accordance with the player's proficiency for use as support for a guitar player's practice. The system estimates the multiple pitches in each time frame and the optimal fingering considering playability and player's proficiency. It combines a conventional multipitch estimation method with a basic dynamic programming method. The difficulty of the fingerings can be changed by tuning the parameter representing the relative weights of the acoustical reproducibility and the fingering easiness. Experiments conducted using synthesized guitar audio signals to evaluate the transcribed tablatures in terms of the multipitch estimation accuracy and fingering easiness demonstrated that the system can simplify the fingering with higher precision of multipitch estimation results than the conventional method.",
"title": ""
},
{
"docid": "e7bd07b86b8f1b50641853c06461ce89",
"text": "Purpose – The purpose of this study is to conduct a scientometric analysis of the body of literature contained in 11 major knowledge management and intellectual capital (KM/IC) peer-reviewed journals. Design/methodology/approach – A total of 2,175 articles published in 11 major KM/IC peer-reviewed journals were carefully reviewed and subjected to scientometric data analysis techniques. Findings – A number of research questions pertaining to country, institutional and individual productivity, co-operation patterns, publication frequency, and favourite inquiry methods were proposed and answered. Based on the findings, many implications emerged that improve one’s understanding of the identity of KM/IC as a distinct scientific field. Research limitations/implications – The pool of KM/IC journals examined did not represent all available publication outlets, given that at least 20 peer-reviewed journals exist in the KM/IC field. There are also KM/IC papers published in other non-KM/IC specific journals. However, the 11 journals that were selected for the study have been evaluated by Bontis and Serenko as the top publications in the KM/IC area. Practical implications – Practitioners have played a significant role in developing the KM/IC field. However, their contributions have been decreasing. There is still very much a need for qualitative descriptions and case studies. It is critically important that practitioners consider collaborating with academics for richer research projects. Originality/value – This is the most comprehensive scientometric analysis of the KM/IC field ever conducted.",
"title": ""
},
{
"docid": "1abcf9480879b3d29072f09d5be8609d",
"text": "Warm restart techniques on training deep neural networks often achieve better recognition accuracies and can be regarded as easy methods to obtain multiple neural networks with no additional training cost from a single training process. Ensembles of intermediate neural networks obtained by warm restart techniques can provide higher accuracy than a single neural network obtained finally by a whole training process. However, existing methods on both of warm restart and its ensemble techniques use fixed cyclic schedules and have little degree of parameter adaption. This paper extends a class of possible schedule strategies of warm restart, and clarifies their effectiveness for recognition performance. Specifically, we propose parameterized functions and various cycle schedules to improve recognition accuracies by the use of deep neural networks with no additional training cost. Experiments on CIFAR-10 and CIFAR-100 show that our methods can achieve more accurate rates than the existing cyclic training and ensemble methods.",
"title": ""
},
{
"docid": "1facd226c134b22f62613073deffce60",
"text": "We present two experiments examining the impact of navigation techniques on users' navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment with 24 participants compared the effect of egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path lengths and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users' spatial memory immediately after a navigation task. However, an additional second experiment with 8 participants revealed such a significant increase in performance of long-term spatial memory: The results of a recall task administered after a 15-minute distractor task indicate a significant advantage of 27% for egocentric body movements in spatial memory. Furthermore, a questionnaire about the subjects' workload revealed that the physical demand of the egocentric navigation was significantly higher but there was less mental demand.",
"title": ""
},
{
"docid": "5227121a2feb59fc05775e2623239da9",
"text": "BACKGROUND\nCriminal offenders with a diagnosis of psychopathy or borderline personality disorder (BPD) share an impulsive nature but tend to differ in their style of emotional response. This study aims to use multiple psychophysiologic measures to compare emotional responses to unpleasant and pleasant stimuli.\n\n\nMETHODS\nTwenty-five psychopaths as defined by the Hare Psychopathy Checklist and 18 subjects with BPD from 2 high-security forensic treatment facilities were included in the study along with 24 control subjects. Electrodermal response was used as an indicator of emotional arousal, modulation of the startle reflex as a measure of valence, and electromyographic activity of the corrugator muscle as an index of emotional expression.\n\n\nRESULTS\nCompared with controls, psychopaths were characterized by decreased electrodermal responsiveness, less facial expression, and the absence of affective startle modulation. A higher percentage of psychopaths showed no startle reflex. Subjects with BPD showed a response pattern very similar to that of controls, ie, they showed comparable autonomic arousal, and their startle responses were strongest to unpleasant slides and weakest to pleasant slides. However, corrugator electromyographic activity in subjects with BPD demonstrated little facial modulation when they viewed either pleasant or unpleasant slides.\n\n\nCONCLUSIONS\nThe results support the theory that psychopaths are characterized by a pronounced lack of fear in response to aversive events. Furthermore, the results suggest a general deficit in processing affective information, regardless of whether stimuli are negative or positive. Emotional hyporesponsiveness was specific to psychopaths, since results for offenders with BPD indicate a widely adequate processing of emotional stimuli.",
"title": ""
},
{
"docid": "e777794833a060f99e11675952cd3342",
"text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.",
"title": ""
},
{
"docid": "3b75d996f21af68a0cd4d49ef7d4e10e",
"text": "Observational studies suggest that including men in reproductive health interventions can enhance positive health outcomes. A randomized controlled trial was designed to test the impact of involving male partners in antenatal health education on maternal health care utilization and birth preparedness in urban Nepal. In total, 442 women seeking antenatal services during second trimester of pregnancy were randomized into three groups: women who received education with their husbands, women who received education alone and women who received no education. The education intervention consisted of two 35-min health education sessions. Women were followed until after delivery. Women who received education with husbands were more likely to attend a post-partum visit than women who received education alone [RR = 1.25, 95% CI = (1.01, 1.54)] or no education [RR = 1.29, 95% CI = (1.04, 1.60)]. Women who received education with their husbands were also nearly twice as likely as control group women to report making >3 birth preparations [RR = 1.99, 95% CI = (1.10, 3.59)]. Study groups were similar with respect to attending the recommended number of antenatal care checkups, delivering in a health institution or having a skilled provider at birth. These data provide evidence that educating pregnant women and their male partners yields a greater net impact on maternal health behaviors compared with educating women alone.",
"title": ""
},
{
"docid": "830b5591bd98199936a5ea10ff2b058b",
"text": "To stand up for the brands they support, members of brand communities develop “oppositional brand loyalty” towards other rival brands. This study identifies how the interaction characteristics of brand community affect the perceived benefits of community members, and whether the perceived benefits cause members to develop community commitment, as well as the relationship between community commitment and oppositional brand loyalty. This study examined members of online automobile communities in Taiwan, and obtained a total of 283 valid samples. The analytical results reveal that interaction characteristics of brand community make members perceive many benefits, with “brand community engagement” being the most noticeable. Furthermore, hedonic, social, and learning benefits are the main factors to form community commitments. When members have community commitments, they will form oppositional brand loyalty to other rival brands. Based on the analytical results, this study provides suggestions to enterprises regarding online brand community operations. © 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | scidocsrr |
e93b72169f7986f4af221e81bd250504 | Combining Convolutional and Recurrent Neural Networks for Human Skin Detection | [
{
"docid": "37637ca24397aba35e1e4926f1a94c91",
"text": "We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally and vertically in both directions, encoding patches or activations, and providing relevant global information. Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features. Upsampling layers follow ReNet layers to recover the original image resolution in the final predictions. The proposed ReSeg architecture is efficient, flexible and suitable for a variety of semantic segmentation tasks. We evaluate ReSeg on several widely-used semantic segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid, achieving stateof-the-art performance. Results show that ReSeg can act as a suitable architecture for semantic segmentation tasks, and may have further applications in other structured prediction problems. The source code and model hyperparameters are available on https://github.com/fvisin/reseg.",
"title": ""
}
] | [
{
"docid": "0629fbef788719deb0c97e411a60b3a3",
"text": "An experimental study was conducted to investigate the flow behavior around a corrugated dragonfly airfoil compared with a traditional, streamlined airfoil and a flat plate. The experimental study was conducted at the chord Reynolds number of ReC =34,000, i.e., the regime where Micro-Air-Vehicles (MAV) usually operate, to explore the potential applications of such bio-inspired airfoils for MAV designs. The measurement results demonstrated clearly that the corrugated dragonfly airfoil has much better performance over the streamlined airfoil and the flat plate in preventing large-scale flow separation and airfoil stall at the test low Reynolds number level. The detailed PIV measurements near the noses of the airfoils elucidated underlying physics about why the corrugated dragonfly airfoil could suppress flow separation and airfoil stall at low Reynolds numbers: Instead of having laminar separation, the protruding corners of the corrugated dragonfly airfoil were found to be acting as “turbulators” to generate unsteady vortices to promote the transition of the boundary layer from laminar to turbulent rapidly. The unsteady vortex structures trapped in the valleys of the corrugated cross section could pump high-speed fluid from outside to near wall regions to provide sufficient energy for the boundary layer to overcome the adverse pressure gradient, thus, discourage flow separations and airfoil stall.",
"title": ""
},
{
"docid": "514d626cc44cf453706c0903cbc645fe",
"text": "Peer group analysis is a new tool for monitoring behavior over time in data mining situations. In particular, the tool detects individual objects that begin to behave in a way distinct from objects to which they had previously been similar. Each object is selected as a target object and is compared with all other objects in the database, using either external comparison criteria or internal criteria summarizing earlier behavior patterns of each object. Based on this comparison, a peer group of objects most similar to the target object is chosen. The behavior of the peer group is then summarized at each subsequent time point, and the behavior of the target object compared with the summary of its peer group. Those target objects exhibiting behavior most different from their peer group summary behavior are flagged as meriting closer investigation. The tool is intended to be part of the data mining process, involving cycling between the detection of objects that behave in anomalous ways and the detailed examination of those objects. Several aspects of peer group analysis can be tuned to the particular application, including the size of the peer group, the width of the moving behavior window being used, the way the peer group is summarized, and the measures of difference between the target object and its peer group summary. We apply the tool in various situations and illustrate its use on a set of credit card transaction data.",
"title": ""
},
{
"docid": "ac979967ab992da6115852e00e4769f2",
"text": "Experiments were carried out to study the effect of high dose of “tulsi” (Ocimum sanctum Linn.) pellets on testis and epididymis in male albino rat. Wheat flour, oil and honey pellets of tulsi leaves were fed to albino rat, at 400mg/ 100g body weight per day, along with normal diet, for a period of 72 days. One group of tulsi-fed rats was left for recovery, after the last dose fed on day 72, up to day 120. This high dose of tulsi was found to cause durationdependant decrease of testis weight and derangements in the histo-architecture of testis as well as epididymis. The diameter of seminiferous tubules decreased considerably, with corresponding increase in the interstitium. Spermatogenesis was arrested, accompanied by degeneration of seminiferous epithelial elements. Epididymal tubules regressed, and the luminal spermatozoa formed a coagulum. In the recovery group, testis and epididymis regained normal weights, where as spermatogenesis was partially restored. Thus, high dose of tulsi leaf affects testicular and epididymyal structure and function reversibly.",
"title": ""
},
{
"docid": "8f25b3b36031653311eee40c6c093768",
"text": "This paper provides a survey of the applications of computers in music teaching. The systems are classified by musical activity rather than by technical approach. The instructional strategies involved and the type of knowledge represented are highlighted and areas for future research are identified.",
"title": ""
},
{
"docid": "9846794c512f847ca16c43bcf055a757",
"text": "Sensing and presenting on-road information of moving vehicles is essential for fully and semi-automated driving. It is challenging to track vehicles from affordable on-board cameras in crowded scenes. The mismatch or missing data are unavoidable and it is ineffective to directly present uncertain cues to support the decision-making. In this paper, we propose a physical model based on incompressible fluid dynamics to represent the vehicle’s motion, which provides hints of possible collision as a continuous scalar riskmap. We estimate the position and velocity of other vehicles from a monocular on-board camera located in front of the ego-vehicle. The noisy trajectories are then modeled as the boundary conditions in the simulation of advection and diffusion process. We then interactively display the animating distribution of substances, and show that the continuous scalar riskmap well matches the perception of vehicles even in presence of the tracking failures. We test our method on real-world scenes and discuss about its application for driving assistance and autonomous vehicle in the future.",
"title": ""
},
{
"docid": "49bd1cdbeea10f39a2b34cfa5baac0ef",
"text": "Recently, image inpainting task has revived with the help of deep learning techniques. Deep neural networks, especially the generative adversarial networks~(GANs) make it possible to recover the missing details in images. Due to the lack of sufficient context information, most existing methods fail to get satisfactory inpainting results. This work investigates a more challenging problem, e.g., the newly-emerging semantic image inpainting - a task to fill in large holes in natural images. In this paper, we propose an end-to-end framework named progressive generative networks~(PGN), which regards the semantic image inpainting task as a curriculum learning problem. Specifically, we divide the hole filling process into several different phases and each phase aims to finish a course of the entire curriculum. After that, an LSTM framework is used to string all the phases together. By introducing this learning strategy, our approach is able to progressively shrink the large corrupted regions in natural images and yields promising inpainting results. Moreover, the proposed approach is quite fast to evaluate as the entire hole filling is performed in a single forward pass. Extensive experiments on Paris Street View and ImageNet dataset clearly demonstrate the superiority of our approach. Code for our models is available at https://github.com/crashmoon/Progressive-Generative-Networks.",
"title": ""
},
{
"docid": "1071d0c189f9220ba59acfca06c5addb",
"text": "A 1.6 Gb/s receiver for optical communication has been designed and fabricated in a 0.25-/spl mu/m CMOS process. This receiver has no transimpedance amplifier and uses the parasitic capacitor of the flip-chip bonded photodetector as an integrating element and resolves the data with a double-sampling technique. A simple feedback loop adjusts a bias current to the average optical signal, which essentially \"AC couples\" the input. The resulting receiver resolves an 11 /spl mu/A input, dissipates 3 mW of power, occupies 80 /spl mu/m/spl times/50 /spl mu/m of area and operates at over 1.6 Gb/s.",
"title": ""
},
{
"docid": "54d45486af755311fc5394c4b628be2e",
"text": "Loop closure detection is essential and important in visual simultaneous localization and mapping (SLAM) systems. Most existing methods typically utilize a separate feature extraction part and a similarity metric part. Compared to these methods, an end-to-end network is proposed in this paper to jointly optimize the two parts in a unified framework for further enhancing the interworking between these two parts. First, a two-branch siamese network is designed to learn respective features for each scene of an image pair. Then a hierarchical weighted distance (HWD) layer is proposed to fuse the multi-scale features of each convolutional module and calculate the distance between the image pair. Finally, by using the contrastive loss in the training process, the effective feature representation and similarity metric can be learned simultaneously. Experiments on several open datasets illustrate the superior performance of our approach and demonstrate that the end-to-end network is feasible to conduct the loop closure detection in real time and provides an implementable method for visual SLAM systems.",
"title": ""
},
{
"docid": "a299b0f58aaba6efff9361ff2b5a1e69",
"text": "The continuing growth of World Wide Web and on-line text collections makes a large volume of information available to users. Automatic text summarization allows users to quickly understand documents. In this paper, we propose an automated technique for single document summarization which combines content-based and graph-based approaches and introduce the Hopfield network algorithm as a technique for ranking text segments. A series of experiments are performed using the DUC collection and a Thai-document collection. The results show the superiority of the proposed technique over reference systems, in addition the Hopfield network algorithm on undirected graph is shown to be the best text segment ranking algorithm in the study",
"title": ""
},
{
"docid": "1251dd7b6b2bfa3778dcdeece4694988",
"text": "Container technology provides a lightweight operating system level virtual hosting environment. Its emergence profoundly changes the development and deployment paradigms of multi-tier distributed applications. However, due to the incomplete implementation of system resource isolation mechanisms in the Linux kernel, some security concerns still exist for multiple containers sharing an operating system kernel on a multi-tenancy container cloud service. In this paper, we first present the information leakage channels we discovered that are accessible within the containers. Such channels expose a spectrum of system-wide host information to the containers without proper resource partitioning. By exploiting such leaked host information, it becomes much easier for malicious adversaries (acting as tenants in the container clouds) to launch advanced attacks that might impact the reliability of cloud services. Additionally, we discuss the root causes of the containers' information leakages and propose a two-stage defense approach. As demonstrated in the evaluation, our solution is effective and incurs trivial performance overhead.",
"title": ""
},
{
"docid": "4929e1f954519f0976ec54e9ed8c2c37",
"text": "Software support for making effective pen-based applications is currently rudimentary. To facilitate the creation of such applications, we have developed SATIN, a Java-based toolkit designed to support the creation of applications that leverage the informal nature of pens. This support includes a scenegraph for manipulating and rendering objects; support for zooming and rotating objects, switching between multiple views of an object, integration of pen input with interpreters, libraries for manipulating ink strokes, widgets optimized for pens, and compatibility with Java's Swing toolkit. SATIN includes a generalized architecture for handling pen input, consisting of recognizers, interpreters, and multi-interpreters. In this paper, we describe the functionality and architecture of SATIN, using two applications built with SATIN as examples.",
"title": ""
},
{
"docid": "dd8c61b00519117ec153b3938f4c6e69",
"text": "The characteristics of athletic shoes have been described with terms like cushioning, stability, and guidance.1,2 Despite many years of effort to optimize athletic shoe construction, the prevalence of running-related lower extremity injuries has not significantly declined; however, athletic performance has reached new heights.3-5 Criteria for optimal athletic shoe construction have been proposed, but no clear consensus has emerged.6-8 Given the unique demands of various sports, sportspecific shoe designs may simultaneously increase performance and decrease injury incidence.9-11 The purpose of this report is to provide an overview of current concepts in athletic shoe design, with emphasis on running shoes, so that athletic trainers and therapists (ATs) can assist their patients in selection of an appropriate shoe design.",
"title": ""
},
{
"docid": "fc9699b4382b1ddc6f60fc6ec883a6d3",
"text": "Applications hosted in today's data centers suffer from internal fragmentation of resources, rigidity, and bandwidth constraints imposed by the architecture of the network connecting the data center's servers. Conventional architectures statically map web services to Ethernet VLANs, each constrained in size to a few hundred servers owing to control plane overheads. The IP routers used to span traffic across VLANs and the load balancers used to spray requests within a VLAN across servers are realized via expensive customized hardware and proprietary software. Bisection bandwidth is low, severly constraining distributed computation Further, the conventional architecture concentrates traffic in a few pieces of hardware that must be frequently upgraded and replaced to keep pace with demand - an approach that directly contradicts the prevailing philosophy in the rest of the data center, which is to scale out (adding more cheap components) rather than scale up (adding more power and complexity to a small number of expensive components).\n Commodity switching hardware is now becoming available with programmable control interfaces and with very high port speeds at very low port cost, making this the right time to redesign the data center networking infrastructure. In this paper, we describe monsoon, a new network architecture, which scales and commoditizes data center networking monsoon realizes a simple mesh-like architecture using programmable commodity layer-2 switches and servers. In order to scale to 100,000 servers or more,monsoon makes modifications to the control plane (e.g., source routing) and to the data plane (e.g., hot-spot free multipath routing via Valiant Load Balancing). It disaggregates the function of load balancing into a group of regular servers, with the result that load balancing server hardware can be distributed amongst racks in the data center leading to greater agility and less fragmentation. The architecture creates a huge, flexible switching domain, supporting any server/any service and unfragmented server capacity at low cost.",
"title": ""
},
{
"docid": "f8854602bbb2f5295a5fba82f22ca627",
"text": "Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or ‘duet’ performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks.",
"title": ""
},
{
"docid": "5cf92beeeb4e1f3e36a8ff1fd639d40d",
"text": "Mobile application spoofing is an attack where a malicious mobile app mimics the visual appearance of another one. A common example of mobile application spoofing is a phishing attack where the adversary tricks the user into revealing her password to a malicious app that resembles the legitimate one. In this paper, we propose a novel spoofing detection approach, tailored to the protection of mobile app login screens, using screenshot extraction and visual similarity comparison. We use deception rate as a novel similarity metric for measuring how likely the user is to consider a potential spoofing app as one of the protected applications. We conducted a large-scale online study where participants evaluated spoofing samples of popular mobile app login screens, and used the study results to implement a detection system that accurately estimates deception rate. We show that efficient detection is possible with low overhead.",
"title": ""
},
{
"docid": "fe42cf28ff020c35d3a3013bb249c7d8",
"text": "Sensors and actuators are the core components of all mechatronic systems used in a broad range of diverse applications. A relatively new and rapidly evolving area is the one of rehabilitation and assistive devices that comes to support and improve the quality of human life. Novel exoskeletons have to address many functional and cost-sensitive issues such as safety, adaptability, customization, modularity, scalability, and maintenance. Therefore, a smart variable stiffness actuator was developed. The described approach was to integrate in one modular unit a compliant actuator with all sensors and electronics required for real-time communications and control. This paper also introduces a new method to estimate and control the actuator's torques without using dedicated expensive torque sensors in conditions where the actuator's torsional stiffness can be adjusted by the user. A 6-degrees-of-freedom exoskeleton was assembled and tested using the technology described in this paper, and is introduced as a real-life case study for the mechatronic design, modularity, and integration of the proposed smart actuators, suitable for human–robot interaction. The advantages are discussed together with possible improvements and the possibility of extending the presented technology to other areas of mechatronics.",
"title": ""
},
{
"docid": "4408d5fa31a64d54fbe4b4d70b18182b",
"text": "Using microarray analysis, this study showed up-regulation of toll-like receptors 1, 2, 4, 7, 8, NF-κB, TNF, p38-MAPK, and MHC molecules in human peripheral blood mononuclear cells following infection with Plasmodium falciparum. This analysis reports herein further studies based on time-course microarray analysis with focus on malaria-induced host immune response. The results show that in early malaria, selected immune response-related genes were up-regulated including α β and γ interferon-related genes, as well as genes of IL-15, CD36, chemokines (CXCL10, CCL2, S100A8/9, CXCL9, and CXCL11), TRAIL and IgG Fc receptors. During acute febrile malaria, up-regulated genes included α β and γ interferon-related genes, IL-8, IL-1b IL-10 downstream genes, TGFB1, oncostatin-M, chemokines, IgG Fc receptors, ADCC signalling, complement-related genes, granzymes, NK cell killer/inhibitory receptors and Fas antigen. During recovery, genes for NK receptorsand granzymes/perforin were up-regulated. When viewed in terms of immune response type, malaria infection appeared to induce a mixed TH1 response, in which α and β interferon-driven responses appear to predominate over the more classic IL-12 driven pathway. In addition, TH17 pathway also appears to play a significant role in the immune response to P. falciparum. Gene markers of TH17 (neutrophil-related genes, TGFB1 and IL-6 family (oncostatin-M)) and THαβ (IFN-γ and NK cytotoxicity and ADCC gene) immune response were up-regulated. Initiation of THαβ immune response was associated with an IFN-αβ response, which ultimately resulted in moderate-mild IFN-γ achieved via a pathway different from the more classic IL-12 TH1 pattern. Based on these observations, this study speculates that in P. falciparum infection, THαβ/TH17 immune response may predominate over ideal TH1 response.",
"title": ""
},
{
"docid": "531a7417bd66ff0fdd7fb35c7d6d8559",
"text": "G. R. White University of Sussex, Brighton, UK Abstract In order to design new methodologies for evaluating the user experience of video games, it is imperative to initially understand two core issues. Firstly, how are video games developed at present, including components such as processes, timescales and staff roles, and secondly, how do studios design and evaluate the user experience. This chapter will discuss the video game development process and the practices that studios currently use to achieve the best possible user experience. It will present four case studies from game developers Disney Interactive (Black Rock Studio), Relentless, Zoe Mode, and HandCircus, each detailing their game development process and also how this integrates with the user experience evaluation. The case studies focus on different game genres, platforms, and target user groups, ensuring that this chapter represents a balanced view of current practices in evaluating user experience during the game development process.",
"title": ""
},
{
"docid": "69fd03b01ba24925cd92e3dd4be8ff4f",
"text": "There have been many proposals for first-order belief networks but these typically only let us reason about the individuals that we know about. There are many instances where we have to quantify over all of the individuals in a population. When we do this the population size often matters and we need to reason about all of the members of the population (but not necessarily individually). This paper presents an algorithm to reason about multiple individuals, where we may know particular facts about some of them, but want to treat the others as a group. Combining unification with variable elimination lets us reason about classes of individuals without needed to ground out the theory.",
"title": ""
}
] | scidocsrr |
f64b0e6c0e0bb7b264772bd594817e45 | Cluster-based sampling of multiclass imbalanced data | [
{
"docid": "f6f6f322118f5240aec5315f183a76ab",
"text": "Learning from data sets that contain very few instances of the minority class usually produces biased classifiers that have a higher predictive accuracy over the majority class, but poorer predictive accuracy over the minority class. SMOTE (Synthetic Minority Over-sampling Technique) is specifically designed for learning from imbalanced data sets. This paper presents a modified approach (MSMOTE) for learning from imbalanced data sets, based on the SMOTE algorithm. MSMOTE not only considers the distribution of minority class samples, but also eliminates noise samples by adaptive mediation. The combination of MSMOTE and AdaBoost are applied to several highly and moderately imbalanced data sets. The experimental results show that the prediction performance of MSMOTE is better than SMOTEBoost in the minority class and F-values are also improved.",
"title": ""
}
] | [
{
"docid": "18f9fff4bd06f28cd39c97ff40467d0f",
"text": "Smart agriculture is an emerging concept, because IOT sensors are capable of providing information about agriculture fields and then act upon based on the user input. In this Paper, it is proposed to develop a Smart agriculture System that uses advantages of cutting edge technologies such as Arduino, IOT and Wireless Sensor Network. The paper aims at making use of evolving technology i.e. IOT and smart agriculture using automation. Monitoring environmental conditions is the major factor to improve yield of the efficient crops. The feature of this paper includes development of a system which can monitor temperature, humidity, moisture and even the movement of animals which may destroy the crops in agricultural field through sensors using Arduino board and in case of any discrepancy send a SMS notification as well as a notification on the application developed for the same to the farmer’s smartphone using Wi-Fi/3G/4G. The system has a duplex communication link based on a cellularInternet interface that allows for data inspection and irrigation scheduling to be programmed through an android application. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
},
{
"docid": "1c5a717591aa049303af7239ff203ebb",
"text": "Indian Biotech opponents have attributed the increase of suicides to the monopolization of GM seeds, centering on patent control, application of terminator technology, marketing strategy, and increased production costs. The contentions of the biotech opponents, however, have been criticized for a lack of transparency in their modus operandi i.e. the use of methodology in their argumentation. The fact is, however, that with the intention of getting the attention of those capable of determining the future of GM cotton in India, opponents resorted to generating controversies. Therefore, this article will review and evaluate the multifaceted contentions of both opponents and defenders. Although the association between seed monopolization and farmer-suicide is debatable, we will show that there is a link between the economic factors associated with Bt. cultivation and farmer suicide. The underlying thesis of biotech opponents becomes all the more significant when analysed vis-à-vis the contention of the globalization critics that there has been a political and economic marginalization of the Indian farmers. Their accusation assumes significance in the context of a fragile democracy like India where market forces are accorded precedence over farmers' needs until election time.",
"title": ""
},
{
"docid": "ca8da405a67d3b8a30337bc23dfce0cc",
"text": "Object detection is one of the most important tasks of computer vision. It is usually performed by evaluating a subset of the possible locations of an image, that are more likely to contain the object of interest. Exhaustive approaches have now been superseded by object proposal methods. The interplay of detectors and proposal algorithms has not been fully analyzed and exploited up to now, although this is a very relevant problem for object detection in video sequences. We propose to connect, in a closed-loop, detectors and object proposal generator functions exploiting the ordered and continuous nature of video sequences. Different from tracking we only require a previous frame to improve both proposal and detection: no prediction based on local motion is performed, thus avoiding tracking errors. We obtain three to four points of improvement in mAP and a detection time that is lower than Faster Regions with CNN features (R-CNN), which is the fastest Convolutional Neural Network (CNN) based generic object detector known at the moment.",
"title": ""
},
{
"docid": "ad02d315182c1b6181c6dda59185142c",
"text": "Fact checking is an essential part of any investigative work. For linguistic, psychological and social reasons, it is an inherently human task. Yet, modern media make it increasingly difficult for experts to keep up with the pace at which information is produced. Hence, we believe there is value in tools to assist them in this process. Much of the effort on Web data research has been focused on coping with incompleteness and uncertainty. Comparatively, dealing with context has received less attention, although it is crucial in judging the validity of a claim. For instance, what holds true in a US state, might not in its neighbors, e.g., due to obsolete or superseded laws. In this work, we address the problem of checking the validity of claims in multiple contexts. We define a language to represent and query facts across different dimensions. The approach is non-intrusive and allows relatively easy modeling, while capturing incompleteness and uncertainty. We describe the syntax and semantics of the language. We present algorithms to demonstrate its feasibility, and we illustrate its usefulness through examples.",
"title": ""
},
{
"docid": "9b9a04a859b51866930b3fb4d93653b6",
"text": "BACKGROUND\nResults of several studies have suggested a probable etiologic association between Epstein-Barr virus (EBV) and leukemias; therefore, the aim of this study was to investigate the association of EBV in childhood leukemia.\n\n\nMETHODS\nA direct isothermal amplification method was developed for detection of the latent membrane protein 1 (LMP1) of EBV in the peripheral blood of 80 patients with leukemia (54 had lymphoid leukemia and 26 had myeloid leukemia) and of 20 hematologically healthy control subjects.\n\n\nRESULTS\nEBV LMP1 gene transcripts were found in 29 (36.3%) of the 80 patients with leukemia but in none of the healthy controls (P < .0001). Of the 29 EBV(+) cases, 23 (79.3%), 5 (17.3%), and 1 (3.4%) were acute lymphoblastic leukemia, acute myeloid leukemia, and chronic myeloid leukemia, respectively.\n\n\nCONCLUSION\nEBV LMP1 gene transcriptional activity was observed in a significant proportion of patients with acute lymphoblastic leukemia. EBV infection in patients with lymphoid leukemia may be a factor involved in the high incidence of pediatric leukemia in the Sudan.",
"title": ""
},
{
"docid": "6c1317ef88110756467a10c4502851bb",
"text": "Deciding query equivalence is an important problem in data management with many practical applications. Solving the problem, however, is not an easy task. While there has been a lot of work done in the database research community in reasoning about the semantic equivalence of SQL queries, prior work mainly focuses on theoretical limitations. In this paper, we present COSETTE, a fully automated prover that can determine the equivalence of SQL queries. COSETTE leverages recent advances in both automated constraint solving and interactive theorem proving, and returns a counterexample (in terms of input relations) if two queries are not equivalent, or a proof of equivalence otherwise. Although the problem of determining equivalence for arbitrary SQL queries is undecidable, our experiments show that COSETTE can determine the equivalences of a wide range of queries that arise in practice, including conjunctive queries, correlated queries, queries with outer joins, and queries with aggregates. Using COSETTE, we have also proved the validity of magic set rewrites, and confirmed various real-world query rewrite errors, including the famous COUNT bug. We are unaware of any prior tool that can automatically determine the equivalences of a broad range of queries as COSETTE, and believe that our tool represents a major step towards building provably-correct query optimizers for real-world database systems.",
"title": ""
},
{
"docid": "d603806f579a937a24ad996543fe9093",
"text": "Early vision relies heavily on rectangular windows for tasks such as smoothing and computing correspondence. While rectangular windows are efficient, they yield poor results near object boundaries. We describe an efficient method for choosing an arbitrarily shaped connected window, in a manner which varies at each pixel. Our approach can be applied to many problems, including image restoration and visual correspondence. It runs in linear time, and takes a few seconds on traditional benchmark images. Performance on both synthetic and real imagery with ground truth appears promising.",
"title": ""
},
{
"docid": "67070d149bcee51cc93a81f21f15ad71",
"text": "As an important and fundamental tool for analyzing the schedulability of a real-time task set on the multiprocessor platform, response time analysis (RTA) has been researched for several years on both Global Fixed Priority (G-FP) and Global Earliest Deadline First (G-EDF) scheduling. This paper proposes a new analysis that improves over current state-of-the-art RTA methods for both G-FP and G-EDF scheduling, by reducing their pessimism. The key observation is that when estimating the carry-in workload, all the existing RTA techniques depend on the worst case scenario in which the carry-in job should execute as late as possible and just finishes execution before its worst case response time (WCRT). But the carry-in workload calculated under this assumption may be over-estimated, and thus the accuracy of the response time analysis may be impacted. To address this problem, we first propose a new method to estimate the carry-in workload more precisely. The proposed method does not depend on any specific scheduling algorithm and can be used for both G-FP and G-EDF scheduling. We then propose a general RTA algorithm that can improve most existing RTA tests by incorporating our carry-in estimation method. To further improve the execution efficiency, we also introduce an optimization technique for our RTA tests. Experiments with randomly generated task sets are conducted and the results show that, compared with the state-of-the-art technologies, the proposed tests exhibit considerable performance improvements, up to 9 and 7.8 percent under G-FP and G-EDF scheduling respectively, in terms of schedulability test precision.",
"title": ""
},
{
"docid": "90f188c1f021c16ad7c8515f1244c08a",
"text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.",
"title": ""
},
{
"docid": "609110c4bf31885d99618994306ef2cc",
"text": "This study examined the ability of a collagen solution to aid revascularization of necrotic-infected root canals in immature dog teeth. Sixty immature teeth from 6 dogs were infected, disinfected, and randomized into experimental groups: 1: no further treatment; 2: blood in canal; 3: collagen solution in canal, 4: collagen solution + blood, and 5: negative controls (left for natural development). Uncorrected chi-square analysis of radiographic results showed no statistical differences (p >or= 0.05) between experimental groups regarding healing of radiolucencies but a borderline statistical difference (p = 0.058) for group 1 versus group 4 for radicular thickening. Group 2 showed significantly more apical closure than group 1 (p = 0.03) and a borderline statistical difference (p = 0.051) for group 3 versus group 1. Uncorrected chi-square analysis revealed that there were no statistical differences between experimental groups for histological results. However, some roots in each of groups 1 to 4 (previously infected) showed positive histologic outcomes (thickened walls in 43.9%, apical closure in 54.9%, and new luminal tissue in 29.3%). Revascularization of disinfected immature dog root canal systems is possible.",
"title": ""
},
{
"docid": "eb0ef9876f37b5974ed27079bcda8e03",
"text": "Increasing number of individuals are using the internet to meet their health information needs; however, little is known about the characteristics of online health information seekers and whether they differ from individuals who search for health information from offline sources. Researchers must examine the primary characteristics of online and offline health information seekers in order to better recognize their needs, highlight improvements that may be made in the arena of internet health information quality and availability, and understand factors that discriminate between those who seek online vs. offline health information. This study examines factors that differentiate between online and offline health information seekers in the United States. Data for this study are from a subsample (n = 385) of individuals from the 2000 General Social Survey. The subsample includes those respondents who were asked Internet and health seeking module questions. Similar to prior research, results of this study show that the majority of both online and offline health information seekers report reliance upon health care professionals as a source of health information. This study is unique in that the results illustrate that there are several key factors (age, income, and education) that discriminate between US online and offline health information seekers; this suggests that general \"digital divide\" characteristics influence where health information is sought. In addition to traditional digital divide factors, those who are healthier and happier are less likely to look exclusively offline for health information. Implications of these findings are discussed in terms of the digital divide and the patient-provider relationship.",
"title": ""
},
{
"docid": "a35bdf118e84d71b161fea1b9e798a1a",
"text": "Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper.",
"title": ""
},
{
"docid": "022f0b83e93b82dfbdf7ae5f5ebe6f8f",
"text": "Most pregnant women at risk of for infection with Plasmodium vivax live in the Asia-Pacific region. However, malaria in pregnancy is not recognised as a priority by many governments, policy makers, and donors in this region. Robust data for the true burden of malaria throughout pregnancy are scarce. Nevertheless, when women have little immunity, each infection is potentially fatal to the mother, fetus, or both. WHO recommendations for the control of malaria in pregnancy are largely based on the situation in Africa, but strategies in the Asia-Pacific region are complicated by heterogeneous transmission settings, coexistence of multidrug-resistant Plasmodium falciparum and Plasmodium vivax parasites, and different vectors. Most knowledge of the epidemiology, effect, treatment, and prevention of malaria in pregnancy in the Asia-Pacific region comes from India, Papua New Guinea, and Thailand. Improved estimates of the morbidity and mortality of malaria in pregnancy are urgently needed. When malaria in pregnancy cannot be prevented, accurate diagnosis and prompt treatment are needed to avert dangerous symptomatic disease and to reduce effects on fetuses.",
"title": ""
},
{
"docid": "62218093e4d3bf81b23512043fc7a013",
"text": "The Internet of things (IoT) refers to every object, which is connected over a network with the ability to transfer data. Users perceive this interaction and connection as useful in their daily life. However any improperly designed and configured technology will exposed to security threats. Therefore an ecosystem for IoT should be designed with security embedded in each layer of its ecosystem. This paper will discussed the security threats to IoT and then proposed an IoT Security Framework to mitigate it. Then IoT Security Framework will be used to develop a Secure IoT Sensor to Cloud Ecosystem.",
"title": ""
},
{
"docid": "ba0051fdc72efa78a7104587042cea64",
"text": "Open innovation breaks the original innovation border of organization and emphasizes the use of suppliers, customers, partners, and other internal and external innovative thinking and resources. How to effectively implement and manage open innovation has become a new business problem. Business ecosystem is the network system of value creation and co-evolution achieved by suppliers, users, partner, and other groups with self-organization mode. This study began with the risk analysis of open innovation implementation; then innovation process was embedded into business ecosystem structure; open innovation mode based on business ecosystem was proposed; business ecosystem based on open innovation was built according to influence degree of each innovative object. Study finds that both sides have a mutual promotion relationship, which provides a new analysis perspective for open innovation and business ecosystem; at the same time, it is also conducive to guiding the concrete practice of implementing open innovation.",
"title": ""
},
{
"docid": "f10d79d1eb6d3ec994c1ec7ec3769437",
"text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. The assumption that algorithms can be kept secret should therefore to be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]",
"title": ""
},
{
"docid": "8410b8b76ab690ed4389efae15608d13",
"text": "The most natural way to speed-up the training of large networks is to use dataparallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one need to increase the batch size to make full use of the computational power of each GPU. However, keeping the accuracy of network with increase of batch size is not trivial. Currently, the state-of-the art method is to increase Learning Rate (LR) proportional to the batch size, and use special learning rate with \"warm-up\" policy to overcome initial optimization difficulty. By controlling the LR during the training process, one can efficiently use largebatch in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we can not scale the learning rate to a large value. To enable large-batch training to general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). LARS LR uses different LRs for different layers based on the norm of the weights (||w||) and the norm of the gradients (||∇w||). By using LARS algoirithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. Large batch can make full use of the system’s computational power. For example, batch-4096 can achieve 3× speedup over batch-512 for ImageNet training by AlexNet model on a DGX-1 station (8 P100 GPUs).",
"title": ""
},
{
"docid": "bde5a1876e93f10ad5942c416063bef6",
"text": "This paper describes an innovative agent-based architecture for mixed-initiative interaction between a human and a robot that interacts via a graphical user interface (GUI). Mixed-initiative interaction typically refers to a flexible interaction strategy between a human and a computer to contribute what is best-suited at the most appropriate time [1]. In this paper, we extend this concept to human-robot interaction (HRI). When compared to pure humancomputer interaction, HRIs encounter additional difficulty, as the user must assess the situation at the robot’s remote location via limited sensory feedback. We propose an agent-based adaptive human-robot interface for mixed-initiative interaction to address this challenge. The proposed adaptive user interface (UI) architecture provides a platform for developing various agents that control robots and user interface components (UICs). Such components permit the human and the robot to communicate missionrelevant information.",
"title": ""
},
{
"docid": "2b2c30fa2dc19ef7c16cf951a3805242",
"text": "A standard approach to estimating online click-based metrics of a ranking function is to run it in a controlled experiment on live users. While reliable and popular in practice, configuring and running an online experiment is cumbersome and time-intensive. In this work, inspired by recent successes of offline evaluation techniques for recommender systems, we study an alternative that uses historical search log to reliably predict online click-based metrics of a \\emph{new} ranking function, without actually running it on live users. To tackle novel challenges encountered in Web search, variations of the basic techniques are proposed. The first is to take advantage of diversified behavior of a search engine over a long period of time to simulate randomized data collection, so that our approach can be used at very low cost. The second is to replace exact matching (of recommended items in previous work) by \\emph{fuzzy} matching (of search result pages) to increase data efficiency, via a better trade-off of bias and variance. Extensive experimental results based on large-scale real search data from a major commercial search engine in the US market demonstrate our approach is promising and has potential for wide use in Web search.",
"title": ""
}
] | scidocsrr |
0f6c92d1fd23fab6d2ef7e67ef22a415 | Evaluating the Usability of Optimizing Text-based CAPTCHA Generation | [
{
"docid": "96d6173f58e36039577c8e94329861b2",
"text": "Reverse Turing tests, or CAPTCHAs, have become an ubiquitous defense used to protect open Web resources from being exploited at scale. An effective CAPTCHA resists existing mechanistic software solving, yet can be solved with high probability by a human being. In response, a robust solving ecosystem has emerged, reselling both automated solving technology and realtime human labor to bypass these protections. Thus, CAPTCHAs can increasingly be understood and evaluated in purely economic terms; the market price of a solution vs the monetizable value of the asset being protected. We examine the market-side of this question in depth, analyzing the behavior and dynamics of CAPTCHA-solving service providers, their price performance, and the underlying labor markets driving this economy.",
"title": ""
},
{
"docid": "36b4c028bcd92115107cf245c1e005c8",
"text": "CAPTCHA is now almost a standard security technology, and has found widespread application in commercial websites. Usability and robustness are two fundamental issues with CAPTCHA, and they often interconnect with each other. This paper discusses usability issues that should be considered and addressed in the design of CAPTCHAs. Some of these issues are intuitive, but some others have subtle implications for robustness (or security). A simple but novel framework for examining CAPTCHA usability is also proposed.",
"title": ""
}
] | [
{
"docid": "25196ef0c4385ec44b62183d9c282fc6",
"text": "It is not well understood how privacy concern and trust influence social interactions within social networking sites. An online survey of two popular social networking sites, Facebook and MySpace, compared perceptions of trust and privacy concern, along with willingness to share information and develop new relationships. Members of both sites reported similar levels of privacy concern. Facebook members expressed significantly greater trust in both Facebook and its members, and were more willing to share identifying information. Even so, MySpace members reported significantly more experience using the site to meet new people. These results suggest that in online interaction, trust is not as necessary in the building of new relationships as it is in face to face encounters. They also show that in an online site, the existence of trust and the willingness to share information do not automatically translate into new social interaction. This study demonstrates online relationships can develop in sites where perceived trust and privacy safeguards are weak.",
"title": ""
},
{
"docid": "528726032a0cfbd366c278cd247b0008",
"text": "It is difficult to develop a computational model that can accurately predict the quality of the video summary. This paper proposes a novel algorithm to summarize one-shot landmark videos. The algorithm can optimally combine multiple unedited consumer video skims into an aesthetically pleasing summary. In particular, to effectively select the representative key frames from multiple videos, an active learning algorithm is derived by taking advantage of the locality of the frames within each video. Toward a smooth video summary, we define skimlet, a video clip with adjustable length, starting frame, and positioned by each skim. Thereby, a probabilistic framework is developed to transfer the visual cues from a collection of aesthetically pleasing photos into the video summary. The length and the starting frame of each skimlet are calculated to maximally smoothen the video summary. At the same time, the unstable frames are removed from each skimlet. Experiments on multiple videos taken from different sceneries demonstrated the aesthetics, the smoothness, and the stability of the generated summary.",
"title": ""
},
{
"docid": "07409cd81cc5f0178724297245039878",
"text": "In recent years, the number of sensor network deployments for real-life applications has rapidly increased and it is expected to expand even more in the near future. Actually, for a credible deployment in a real environment three properties need to be fulfilled, i.e., energy efficiency, scalability and reliability. In this paper we focus on IEEE 802.15.4 sensor networks and show that they can suffer from a serious MAC unreliability problem, also in an ideal environment where transmission errors never occur. This problem arises whenever power management is enabled - for improving the energy efficiency - and results in a very low delivery ratio, even when the number of nodes in the network is very low (e.g., 5). We carried out an extensive analysis, based on simulations and real measurements, to investigate the ultimate reasons of this problem. We found that it is caused by the default MAC parameter setting suggested by the 802.15.4 standard. We also found that, with a more appropriate parameter setting, it is possible to achieve the desired level of reliability (as well as a better energy efficiency). However, in some scenarios this is possible only by choosing parameter values formally not allowed by the standard.",
"title": ""
},
{
"docid": "c70e11160c90bd67caa2294c499be711",
"text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.",
"title": ""
},
{
"docid": "d52efc862c68ec09a5ae3395464996ed",
"text": "The growth of digital video has given rise to a need for computational methods for evaluating the visual quality of digital video. We have developed a new digital video quality metric, which we call DVQ (Digital Video Quality). Here we provide a brief description of the metric, and give a preliminary report on its performance. DVQ accepts a pair of digital video sequences, and computes a measure of the magnitude of the visible difference between them. The metric is based on the Discrete Cosine Transform. It incorporates aspects of early visual processing, including light adaptation, luminance and chromatic channels, spatial and temporal filtering, spatial frequency channels, contrast masking, and probability summation. It also includes primitive dynamics of light adaptation and contrast masking. We have applied the metric to digital video sequences corrupted by various typical compression artifacts, and compared the results to quality ratings made by human observers.",
"title": ""
},
{
"docid": "49a041e18a063876dc595f33fe8239a8",
"text": "Significant vulnerabilities have recently been identified in collaborative filtering recommender systems. These vulnerabilities mostly emanate from the open nature of such systems and their reliance on userspecified judgments for building profiles. Attackers can easily introduce biased data in an attempt to force the system to “adapt” in a manner advantageous to them. Our research in secure personalization is examining a range of attack models, from the simple to the complex, and a variety of recommendation techniques. In this chapter, we explore an attack model that focuses on a subset of users with similar tastes and show that such an attack can be highly successful against both user-based and item-based collaborative filtering. We also introduce a detection model that can significantly decrease the impact of this attack.",
"title": ""
},
{
"docid": "6226b650540d812b6c40939a582331ef",
"text": "With an increasingly mobile society and the worldwide deployment of mobile and wireless networks, the wireless infrastructure can support many current and emerging healthcare applications. This could fulfill the vision of “Pervasive Healthcare” or healthcare to anyone, anytime, and anywhere by removing locational, time and other restraints while increasing both the coverage and the quality. In this paper, we present applications and requirements of pervasive healthcare, wireless networking solutions and several important research problems. The pervasive healthcare applications include pervasive health monitoring, intelligent emergency management system, pervasive healthcare data access, and ubiquitous mobile telemedicine. One major application in pervasive healthcare, termed comprehensive health monitoring is presented in significant details using wireless networking solutions of wireless LANs, ad hoc wireless networks, and, cellular/GSM/3G infrastructureoriented networks.Many interesting challenges of comprehensive wireless health monitoring, including context-awareness, reliability, and, autonomous and adaptable operation are also presented along with several high-level solutions. Several interesting research problems have been identified and presented for future research.",
"title": ""
},
{
"docid": "d86eb65183f059a4ca7cb0ad9190a0ca",
"text": "Different short circuits, load growth, generation shortage, and other faults which disturb the voltage and frequency stability are serious threats to the system security. The frequency and voltage instability causes dispersal of a power system into sub-systems, and leads to blackout as well as heavy damages of the system equipment. This paper presents a fast and optimal adaptive load shedding method, for isolated power system using Artificial Neural Networks (ANN). The proposed method is able to determine the necessary load shedding in all steps simultaneously and is much faster than conventional methods. This method has been tested on the New-England power system. The simulation results show that the proposed algorithm is fast, robust and optimal values of load shedding in different loading scenarios are obtained in comparison with conventional method.",
"title": ""
},
{
"docid": "cca431043c72db900f45e7b79bb9fb66",
"text": "During the past decade, there have been a variety of significant developments in data mining techniques. Some of these developments are implemented in customized service to develop customer relationship. Customized service is actually crucial in retail markets. Marketing managers can develop long-term and pleasant relationships with customers if they can detect and predict changes in customer behavior. In the dynamic retail market, understanding changes in customer behavior can help managers to establish effective promotion campaigns. This study integrates customer behavioral variables, demographic variables, and transaction database to establish a method of mining changes in customer behavior. For mining change patterns, two extended measures of similarity and unexpectedness are designed to analyze the degree of resemblance between patterns at different time periods. The proposed approach for mining changes in customer behavior can assist managers in developing better marketing strategies. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "28600f0ee7ca1128874e830e01a028de",
"text": "This paper presents and analyzes a three-tier architecture for collecting sensor data in sparse sensor networks. Our approach exploits the presence of mobile entities (called MULEs) present in the environment. When in close range, MULEs pick up data from the sensors, buffer it, and deliver it to wired access points. This can lead to substantial power savings at the sensors as they only have to transmit over a short-range. This paper focuses on a simple analytical model for understanding performance as system parameters are scaled. Our model assumes a two-dimensional random walk for mobility and incorporates key system variables such as number of MULEs, sensors and access points. The performance metrics observed are the data success rate (the fraction of generated data that reaches the access points), latency and the required buffer capacities on the sensors and the MULEs. The modeling and simulation results can be used for further analysis and provide certain guidelines for deployment of such systems. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a992be26d6b41ee4d3a8f8fa7014727b",
"text": "In this paper, we develop a heart disease prediction model that can assist medical professionals in predicting heart disease status based on the clinical data of patients. Firstly, we select 14 important clinical features, i.e., age, sex, chest pain type, trestbps, cholesterol, fasting blood sugar, resting ecg, max heart rate, exercise induced angina, old peak, slope, number of vessels colored, thal and diagnosis of heart disease. Secondly, we develop an prediction model using J48 decision tree for classifying heart disease based on these clinical features against unpruned, pruned and pruned with reduced error pruning approach.. Finally, the accuracy of Pruned J48 Decision Tree with Reduced Error Pruning Approach is more better then the simple Pruned and Unpruned approach. The result obtained that which shows that fasting blood sugar is the most important attribute which gives better classification against the other attributes but its gives not better accuracy. Keywords—Data mining, Reduced Error Pruning, Gain Ratio and Decision Tree.",
"title": ""
},
{
"docid": "85ab2edb48dd57f259385399437ea8e9",
"text": "Training robust deep video representations has proven to be much more challenging than learning deep image representations. This is in part due to the enormous size of raw video streams and the high temporal redundancy; the true and interesting signal is often drowned in too much irrelevant data. Motivated by that the superfluous information can be reduced by up to two orders of magnitude by video compression (using H.264, HEVC, etc.), we propose to train a deep network directly on the compressed video. This representation has a higher information density, and we found the training to be easier. In addition, the signals in a compressed video provide free, albeit noisy, motion information. We propose novel techniques to use them effectively. Our approach is about 4.6 times faster than Res3D and 2.7 times faster than ResNet-152. On the task of action recognition, our approach outperforms all the other methods on the UCF-101, HMDB-51, and Charades dataset.",
"title": ""
},
{
"docid": "cb8a21bf8d0642ee9410419ecf472b21",
"text": "Sentiment analysis or opinion mining is one of the major tasks of NLP (Natural Language Processing). Sentiment analysis has gain much attention in recent years. In this paper, we aim to tackle the problem of sentiment polarity categorization, which is one of the fundamental problems of sentiment analysis. A general process for sentiment polarity categorization is proposed with detailed process descriptions. Data used in this study are online product reviews collected from Amazon.com. Experiments for both sentence-level categorization and review-level categorization are performed with promising outcomes. At last, we also give insight into our future work on sentiment analysis.",
"title": ""
},
{
"docid": "585ec3229d7458f5d6bca3c7936eb306",
"text": "Graph processing has gained renewed attention. The increasing large scale and wealth of connected data, such as those accrued by social network applications, demand the design of new techniques and platforms to efficiently derive actionable information from large scale graphs. Hybrid systems that host processing units optimized for both fast sequential processing and bulk processing (e.g., GPUaccelerated systems) have the potential to cope with the heterogeneous structure of real graphs and enable high performance graph processing. Reaching this point, however, poses multiple challenges. The heterogeneity of the processing elements (e.g., GPUs implement a different parallel processing model than CPUs and have much less memory) and the inherent irregularity of graph workloads require careful graph partitioning and load assignment. In particular, the workload generated by a partitioning scheme should match the strength of the processing element the partition is allocated to. This work explores the feasibility and quantifies the performance gains of such low-cost partitioning schemes. We propose to partition the workload between the two types of processing elements based on vertex connectivity. We show that such partitioning schemes offer a simple, yet efficient way to boost the overall performance of the hybrid system. Our evaluation illustrates that processing a 4-billion edges graph on a system with one CPU socket and one GPU, while offloading as little as 25% of the edges to the GPU, achieves 2x performance improvement over state-of-the-art implementations running on a dual-socket symmetric system. Moreover, for the same graph, a hybrid system with dualsocket and dual-GPU is capable of 1.13 Billion breadth-first search traversed edge per second, a performance rate that is competitive with the latest entries in the Graph500 list, yet at a much lower price point.",
"title": ""
},
{
"docid": "6c72b38246e35d1f49d7f55e89b42f21",
"text": "The success of IT project related to numerous factors. It had an important significance to find the critical factors for the success of project. Based on the general analysis of IT project management, this paper analyzed some factors of project management for successful IT project from the angle of modern project management. These factors include project participators, project communication, collaboration, and information sharing mechanism as well as project management process. In the end, it analyzed the function of each factor for a successful IT project. On behalf of the collective goal, by the use of the favorable project communication and collaboration, the project participants carry out successfully to the management of the process, which is significant to the project, and make project achieve success eventually.",
"title": ""
},
{
"docid": "0fafa2597726dfeb4d35721c478f1038",
"text": "Visual saliency models have enjoyed a big leap in performance in recent years, thanks to advances in deep learning and large scale annotated data. Despite enormous effort and huge breakthroughs, however, models still fall short in reaching human-level accuracy. In this work, I explore the landscape of the field emphasizing on new deep saliency models, benchmarks, and datasets. A large number of image and video saliency models are reviewed and compared over two image benchmarks and two large scale video datasets. Further, I identify factors that contribute to the gap between models and humans and discuss the remaining issues that need to be addressed to build the next generation of more powerful saliency models. Some specific questions that are addressed include: in what ways current models fail, how to remedy them, what can be learned from cognitive studies of attention, how explicit saliency judgments relate to fixations, how to conduct fair model comparison, and what are the emerging applications of saliency models.",
"title": ""
},
{
"docid": "fd18b3d4799d23735c48bff3da8fd5ff",
"text": "There is need for an Integrated Event Focused Crawling system to collect Web data about key events. When a disaster or other significant event occurs, many users try to locate the most up-to-date information about that event. Yet, there is little systematic collecting and archiving anywhere of event information. We propose intelligent event focused crawling for automatic event tracking and archiving, ultimately leading to effective access. We developed an event model that can capture key event information, and incorporated that model into a focused crawling algorithm. For the focused crawler to leverage the event model in predicting webpage relevance, we developed a function that measures the similarity between two event representations. We then conducted two series of experiments to evaluate our system about two recent events: California shooting and Brussels attack. The first experiment series evaluated the effectiveness of our proposed event model representation when assessing the relevance of webpages. Our event model-based representation outperformed the baseline method (topic-only); it showed better results in precision, recall, and F1-score with an improvement of 20% in F1-score. The second experiment series evaluated the effectiveness of the event model-based focused crawler for collecting relevant webpages from the WWW. Our event model-based focused crawler outperformed the state-of-the-art baseline focused crawler (best-first); it showed better results in harvest ratio with an average improvement of 40%.",
"title": ""
},
{
"docid": "4142b1fc9e37ffadc6950105c3d99749",
"text": "Just-noticeable distortion (JND), which refers to the maximum distortion that the human visual system (HVS) cannot perceive, plays an important role in perceptual image and video processing. In comparison with JND estimation for images, estimation of the JND profile for video needs to take into account the temporal HVS properties in addition to the spatial properties. In this paper, we develop a spatio-temporal model estimating JND in the discrete cosine transform domain. The proposed model incorporates the spatio-temporal contrast sensitivity function, the influence of eye movements, luminance adaptation, and contrast masking to be more consistent with human perception. It is capable of yielding JNDs for both still images and video with significant motion. The experiments conducted in this study have demonstrated that the JND values estimated for video sequences with moving objects by the model are in line with the HVS perception. The accurate JND estimation of the video towards the actual visibility bounds can be translated into resource savings (e.g., for bandwidth/storage or computation) and performance improvement in video coding and other visual processing tasks (such as perceptual quality evaluation, visual signal restoration/enhancement, watermarking, authentication, and error protection)",
"title": ""
},
{
"docid": "52d2004c762d4701ab275d9757c047fc",
"text": "Somatic mosaicism — the presence of genetically distinct populations of somatic cells in a given organism — is frequently masked, but it can also result in major phenotypic changes and reveal the expression of otherwise lethal genetic mutations. Mosaicism can be caused by DNA mutations, epigenetic alterations of DNA, chromosomal abnormalities and the spontaneous reversion of inherited mutations. In this review, we discuss the human disorders that result from somatic mosaicism, as well as the molecular genetic mechanisms by which they arise. Specifically, we emphasize the role of selection in the phenotypic manifestations of mosaicism.",
"title": ""
}
] | scidocsrr |
ec6484ba5c85d5feffa574b53588b534 | Houdini, an Annotation Assistant for ESC/Java | [
{
"docid": "cb1952a4931955856c6479d7054c57e7",
"text": "This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes thatrequire client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating races conditions. On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs.",
"title": ""
}
] | [
{
"docid": "3fcce3664db5812689c121138e2af280",
"text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "e7968b6bfb3535907b380cfd93128b0e",
"text": "We present a novel solution to the problem of depth reconstruction from a single image. Single view 3D reconstruction is an ill-posed problem. We address this problem by using an example-based synthesis approach. Our method uses a database of objects from a single class (e.g. hands, human figures) containing example patches of feasible mappings from the appearance to the depth of each object. Given an image of a novel object, we combine the known depths of patches from similar objects to produce a plausible depth estimate. This is achieved by optimizing a global target function representing the likelihood of the candidate depth. We demonstrate how the variability of 3D shapes and their poses can be handled by updating the example database on-the-fly. In addition, we show how we can employ our method for the novel task of recovering an estimate for the occluded backside of the imaged objects. Finally, we present results on a variety of object classes and a range of imaging conditions.",
"title": ""
},
{
"docid": "89596e6eedbc1f13f63ea144b79fdc64",
"text": "This paper describes our work in integrating three different lexical resources: FrameNet, VerbNet, and WordNet, into a unified, richer knowledge-base, to the end of enabling more robust semantic parsing. The construction of each of these lexical resources has required many years of laborious human effort, and they all have their strengths and shortcomings. By linking them together, we build an improved resource in which (1) the coverage of FrameNet is extended, (2) the VerbNet lexicon is augmented with frame semantics, and (3) selectional restrictions are implemented using WordNet semantic classes. The synergistic exploitation of various lexical resources is crucial for many complex language processing applications, and we prove it once again effective in building a robust semantic parser.",
"title": ""
},
{
"docid": "24e31f9cdedcc7aa8f9489db9db13f94",
"text": "A basic ingredient in transformational leadership development consists in identifying leadership qualities via distribution of the multifactor leadership questionnaire (MLQ) to followers of the target leaders. It is vital that the MLQ yields an accurate and unbiased assessment of leaders on the various leadership dimensions. This article focuses on two sources of bias which may occur in identifying leadership qualities. First, when followers assess the strengths and weaknesses of their leaders, they may have difficulty in differentiating between the various transformational and transactional leadership behaviours. It is found that this is only the case for the transformational leadership attributes because the four transformational leadership dimensions measured by the MLQ correlate highly and cluster into one factor. MLQ ratings on the three transactional leadership dimensions are found not to be interrelated and show evidence for three distinct factors: contingent reward, active management-by-exception and passive leadership. Second, social desirability does not seem to be a strong biasing factor, although the transformational leadership scale is somewhat more socially desirable. These findings emphasize that the measurement of so-called “new” leadership qualities remains a controversial issue in leadership development. Practical implications of these findings and avenues for future research are also discussed.",
"title": ""
},
{
"docid": "44e7e452b9b27d2028d15c88256eff30",
"text": "In social media communication, multilingual speakers often switch between languages, and, in such an environment, automatic language identification becomes both a necessary and challenging task. In this paper, we describe our work in progress on the problem of automatic language identification for the language of social media. We describe a new dataset that we are in the process of creating, which contains Facebook posts and comments that exhibit code mixing between Bengali, English and Hindi. We also present some preliminary word-level language identification experiments using this dataset. Different techniques are employed, including a simple unsupervised dictionary-based approach, supervised word-level classification with and without contextual clues, and sequence labelling using Conditional Random Fields. We find that the dictionary-based approach is surpassed by supervised classification and sequence labelling, and that it is important to take contextual clues into consideration.",
"title": ""
},
{
"docid": "32fd7a91091f74a5ea55226aa44403d3",
"text": "Previous research has shown that patients with schizophrenia are impaired in reinforcement learning tasks. However, behavioral learning curves in such tasks originate from the interaction of multiple neural processes, including the basal ganglia- and dopamine-dependent reinforcement learning (RL) system, but also prefrontal cortex-dependent cognitive strategies involving working memory (WM). Thus, it is unclear which specific system induces impairments in schizophrenia. We recently developed a task and computational model allowing us to separately assess the roles of RL (slow, cumulative learning) mechanisms versus WM (fast but capacity-limited) mechanisms in healthy adult human subjects. Here, we used this task to assess patients' specific sources of impairments in learning. In 15 separate blocks, subjects learned to pick one of three actions for stimuli. The number of stimuli to learn in each block varied from two to six, allowing us to separate influences of capacity-limited WM from the incremental RL system. As expected, both patients (n = 49) and healthy controls (n = 36) showed effects of set size and delay between stimulus repetitions, confirming the presence of working memory effects. Patients performed significantly worse than controls overall, but computational model fits and behavioral analyses indicate that these deficits could be entirely accounted for by changes in WM parameters (capacity and reliability), whereas RL processes were spared. These results suggest that the working memory system contributes strongly to learning impairments in schizophrenia.",
"title": ""
},
{
"docid": "3df8f7669b6a9d3509cf72eaa8d94248",
"text": "Current forensic tools for examination of embedded systems like mobile phones and PDA’s mostly perform data extraction on a logical level and do not consider the type of storage media during data analysis. This paper suggests a low level approach for the forensic examination of flash memories and describes three low-level data acquisition methods for making full memory copies of flash memory devices. Results are presented of a file system study in which USB memory sticks from 45 different make and models were used. For different mobile phones is shown how full memory copies of their flash memories can be made and which steps are needed to translate the extracted data into a format that can be understood by common forensic media analysis tools. Artifacts, caused by flash specific operations like block erasing and wear leveling, are discussed and directions are given for enhanced data recovery and analysis on data originating from flash memory.",
"title": ""
},
{
"docid": "7a180e503a0b159d545047443524a05a",
"text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.",
"title": ""
},
{
"docid": "5c8923335dd4ee4c2123b5b3245fb595",
"text": "Virtualization is a key enabler of Cloud computing. Due to the numerous vulnerabilities in current implementations of virtualization, security is the major concern of Cloud computing. In this paper, we propose an enhanced security framework to detect intrusions at the virtual network layer of Cloud. It combines signature and anomaly based techniques to detect possible attacks. It uses different classifiers viz; naive bayes, decision tree, random forest, extra trees and linear discriminant analysis for an efficient and effective detection of intrusions. To detect distributed attacks at each cluster and at whole Cloud, it collects intrusion evidences from each region of Cloud and applies Dempster-Shafer theory (DST) for final decision making. We analyze the proposed security framework in terms of Cloud IDS requirements through offline simulation using different intrusion datasets.",
"title": ""
},
{
"docid": "24411f7fe027e5eb617cf48c3e36ce05",
"text": "Reliability assessment of distribution system, based on historical data and probabilistic methods, leads to an unreliable estimation of reliability indices since the data for the distribution components are usually inaccurate or unavailable. Fuzzy logic is an efficient method to deal with the uncertainty in reliability inputs. In this paper, the ENS index along with other commonly used indices in reliability assessment are evaluated for the distribution system using fuzzy logic. Accordingly, the influential variables on the failure rate and outage duration time of the distribution components, which are natural or human-made, are explained using proposed fuzzy membership functions. The reliability indices are calculated and compared for different cases of the system operations by simulation on the IEEE RBTS Bus 2. The results of simulation show how utilities can significantly improve the reliability of their distribution system by considering the risk of the influential variables.",
"title": ""
},
{
"docid": "ef771fa11d9f597f94cee5e64fcf9fd6",
"text": "The principle of artificial curiosity directs active exploration towards the most informative or most interesting data. We show its usefulness for global black box optimization when data point evaluations are expensive. Gaussian process regression is used to model the fitness function based on all available observations so far. For each candidate point this model estimates expected fitness reduction, and yields a novel closed-form expression of expected information gain. A new type of Pareto-front algorithm continually pushes the boundary of candidates not dominated by any other known data according to both criteria, using multi-objective evolutionary search. This makes the exploration-exploitation trade-off explicit, and permits maximally informed data selection. We illustrate the robustness of our approach in a number of experimental scenarios.",
"title": ""
},
{
"docid": "53b6315bfb8fcfef651dd83138b11378",
"text": "We illustrate the correspondence between uncertainty sets in robust optimization and some popular risk measures in finance, and show how robust optimization can be used to generalize the concepts of these risk measures. We also show that by using properly defined uncertainty sets in robust optimization models, one can construct coherent risk measures. Our results have implications for efficient portfolio optimization under different measures of risk. Department of Mathematics, National University of Singapore, Singapore 117543. Email: matkbn@nus.edu.sg. The research of the author was partially supported by Singapore-MIT Alliance, NUS Risk Management Institute and NUS startup grants R-146-050-070-133 & R146-050-070-101. Division of Mathematics and Sciences, Babson College, Babson Park, MA 02457, USA. E-mail: dpachamanova@babson.edu. Research supported by the Gill grant from the Babson College Board of Research. NUS Business School, National University of Singapore. Email: dscsimm@nus.edu.sg. The research of the author was partially supported by Singapore-MIT Alliance, NUS Risk Management Institute and NUS academic research grant R-314-000-066-122 and R-314-000-068-122.",
"title": ""
},
{
"docid": "913e167521f0ce7a7f1fb0deac58ae9c",
"text": "Prospect theory is a descriptive theory of how individuals choose among risky alternatives. The theory challenged the conventional wisdom that economic decision makers are rational expected utility maximizers. We present a number of empirical demonstrations that are inconsistent with the classical theory, expected utility, but can be explained by prospect theory. We then discuss the prospect theory model, including the value function and the probability weighting function. We conclude by highlighting several applications of the theory.",
"title": ""
},
{
"docid": "cbaf7cd4e17c420b7546d132959b3283",
"text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"title": ""
},
{
"docid": "c8f9d10de0d961e4ee14b6b118b5f89a",
"text": "Deep learning is having a transformative effect on how sensor data are processed and interpreted. As a result, it is becoming increasingly feasible to build sensor-based computational models that are much more robust to real-world noise and complexity than previously possible. It is paramount that these innovations reach mobile and embedded devices that often rely on understanding and reacting to sensor data. However, deep models conventionally demand a level of system resources (e.g., memory and computation) that makes them problematic to run directly on constrained devices. In this work, we present the DeepX toolkit (DXTK); an opensource collection of software components for simplifying the execution of deep models on resource-sensitive platforms. DXTK contains a number of pre-trained low-resource deep models that users can quickly adopt and integrate for their particular application needs. It also offers a range of runtime options for executing deep models on range of devices including both Android and Linux variants. But the heart of DXTK is a series of optimization techniques (viz. weight/sparse factorization, convolution separation, precision scaling, and parameter cleaning). Each technique offers a complementary approach to shaping system resource requirements, and is compatible with deep and convolutional neural networks. We hope that DXTK proves to be a valuable resource for the community, and accelerates the adoption and study of resource-constrained deep learning.",
"title": ""
},
{
"docid": "ddf09617b266d483d5e3ab3dcb479b69",
"text": "Writing a research article can be a daunting task, and often, writers are not certain what should be included and how the information should be conveyed. Fortunately, scientific and engineering journal articles follow an accepted format. They contain an introduction which includes a statement of the problem, a literature review, and a general outline of the paper, a methods section detailing the methods used, separate or combined results, discussion and application sections, and a final summary and conclusions section. Here, each of these elements is described in detail using examples from the published literature as illustration. Guidance is also provided with respect to style, getting started, and the revision/review process.",
"title": ""
},
{
"docid": "16de36d6bf6db7c294287355a44d0f61",
"text": "The Computational Linguistics (CL) Summarization Pilot Task was created to encourage a community effort to address the research problem of summarizing research articles as “faceted summaries” in the domain of computational linguistics. In this pilot stage, a handannotated set of citing papers was provided for ten reference papers to help in automating the citation span and discourse facet identification problems. This paper details the corpus construction efforts by the organizers and the participating teams, who also participated in the task-based evaluation. The annotated development corpus used for this pilot task is publicly available at: https://github.com/WING-",
"title": ""
},
{
"docid": "c718b84951edfe294b8287ef3f5a9c6a",
"text": "Dynamic Searchable Symmetric Encryption (DSSE) allows a client to perform keyword searches over encrypted files via an encrypted data structure. Despite its merits, DSSE leaks search and update patterns when the client accesses the encrypted data structure. These leakages may create severe privacy problems as already shown, for example, in recent statistical attacks on DSSE. While Oblivious Random Access Memory (ORAM) can hide such access patterns, it incurs significant communication overhead and, therefore, it is not yet fully practical for cloud computing systems. Hence, there is a critical need to develop private access schemes over the encrypted data structure that can seal the leakages of DSSE while achieving practical search/update operations.\n In this paper, we propose a new oblivious access scheme over the encrypted data structure for searchable encryption purposes, that we call <u>D</u>istributed <u>O</u>blivious <u>D</u>ata structure <u>DSSE</u> (DOD-DSSE). The main idea is to create a distributed encrypted incidence matrix on two non-colluding servers such that no arbitrary queries on these servers can be linked to each other. This strategy prevents not only recent statistical attacks on the encrypted data structure but also other potential threats exploiting query linkability. Our security analysis proves that DOD-DSSE ensures the unlink-ability of queries and, therefore, offers much higher security than traditional DSSE. At the same time, our performance evaluation demonstrates that DOD-DSSE is two orders of magnitude faster than ORAM-based techniques (e.g., Path ORAM), since it only incurs a small-constant number of communication overhead. That is, we deployed DOD-DSSE on geographically distributed Amazon EC2 servers, and showed that, a search/update operation on a very large dataset only takes around one second with DOD-DSSE, while it takes 3 to 13 minutes with Path ORAM-based methods.",
"title": ""
},
{
"docid": "6dfb4c016db41a27587ef08011a7cf0e",
"text": "The objective of this work is to detect shadows in images. We pose this as the problem of labeling image regions, where each region corresponds to a group of superpixels. To predict the label of each region, we train a kernel Least-Squares Support Vector Machine (LSSVM) for separating shadow and non-shadow regions. The parameters of the kernel and the classifier are jointly learned to minimize the leave-one-out cross validation error. Optimizing the leave-one-out cross validation error is typically difficult, but it can be done efficiently in our framework. Experiments on two challenging shadow datasets, UCF and UIUC, show that our region classifier outperforms more complex methods. We further enhance the performance of the region classifier by embedding it in a Markov Random Field (MRF) framework and adding pairwise contextual cues. This leads to a method that outperforms the state-of-the-art for shadow detection. In addition we propose a new method for shadow removal based on region relighting. For each shadow region we use a trained classifier to identify a neighboring lit region of the same material. Given a pair of lit-shadow regions we perform a region relighting transformation based on histogram matching of luminance values between the shadow region and the lit region. Once a shadow is detected, we demonstrate that our shadow removal approach produces results that outperform the state of the art by evaluating our method using a publicly available benchmark dataset.",
"title": ""
},
{
"docid": "b82f7b7a317715ba0c7ca87db92c7bf6",
"text": "Regions of hypoxia in tumours can be modelled in vitro in 2D cell cultures with a hypoxic chamber or incubator in which oxygen levels can be regulated. Although this system is useful in many respects, it disregards the additional physiological gradients of the hypoxic microenvironment, which result in reduced nutrients and more acidic pH. Another approach to hypoxia modelling is to use three-dimensional spheroid cultures. In spheroids, the physiological gradients of the hypoxic tumour microenvironment can be inexpensively modelled and explored. In addition, spheroids offer the advantage of more representative modelling of tumour therapy responses compared with 2D culture. Here, we review the use of spheroids in hypoxia tumour biology research and highlight the different methodologies for spheroid formation and how to obtain uniformity. We explore the challenge of spheroid analyses and how to determine the effect on the hypoxic versus normoxic components of spheroids. We discuss the use of high-throughput analyses in hypoxia screening of spheroids. Furthermore, we examine the use of mathematical modelling of spheroids to understand more fully the hypoxic tumour microenvironment.",
"title": ""
}
] | scidocsrr |
bf2e1edf4dde4e9429269f3d342102fe | False Confessions : Causes , Consequences , and Implications for Reform | [
{
"docid": "da6a74341c8b12658aea2a267b7a0389",
"text": "An experiment demonstrated that false incriminating evidence can lead people to accept guilt for a crime they did not commit. Subjects in a fastor slow-paced reaction time task were accused of damaging a computer by pressing the wrong key. All were truly innocent and initially denied the charge. A confederate then said she saw the subject hit the key or did not see the subject hit the key. Compared with subjects in the slowpacelno-witness group, those in the fast-pace/witness group were more likely to sign a confession, internalize guilt for the event, and confabulate details in memory consistent with that belief Both legal and conceptual implications are discussed. In criminal law, confession evidence is a potent weapon for the prosecution and a recurring source of controversy. Whether a suspect's self-incriminating statement was voluntary or coerced and whether a suspect was of sound mind are just two of the issues that trial judges and juries consider on a routine basis. To guard citizens against violations of due process and to minimize the risk that the innocent would confess to crimes they did not commit, the courts have erected guidelines for the admissibility of confession evidence. Although there is no simple litmus test, confessions are typically excluded from triai if elicited by physical violence, a threat of harm or punishment, or a promise of immunity or leniency, or without the suspect being notified of his or her Miranda rights. To understand the psychology of criminal confessions, three questions need to be addressed: First, how do police interrogators elicit self-incriminating statements (i.e., what means of social influence do they use)? Second, what effects do these methods have (i.e., do innocent suspects ever confess to crimes they did not commit)? Third, when a coerced confession is retracted and later presented at trial, do juries sufficiently discount the evidence in accordance with the law? General reviews of relevant case law and research are available elsewhere (Gudjonsson, 1992; Wrightsman & Kassin, 1993). The present research addresses the first two questions. Informed by developments in case law, the police use various methods of interrogation—including the presentation of false evidence (e.g., fake polygraph, fingerprints, or other forensic test results; staged eyewitness identifications), appeals to God and religion, feigned friendship, and the use of prison informants. A number of manuals are available to advise detectives on how to extract confessions from reluctant crime suspects (Aubry & Caputo, 1965; O'Hara & O'Hara, 1981). The most popular manual is Inbau, Reid, and Buckley's (1986) Criminal Interrogation and Confessions, originally published in 1%2, and now in its third edition. Address correspondence to Saul Kassin, Department of Psychology, Williams College, WllUamstown, MA 01267. After advising interrogators to set aside a bare, soundproof room absent of social support and distraction, Inbau et al, (1986) describe in detail a nine-step procedure consisting of various specific ploys. In general, two types of approaches can be distinguished. One is minimization, a technique in which the detective lulls Che suspect into a false sense of security by providing face-saving excuses, citing mitigating circumstances, blaming the victim, and underplaying the charges. 
The second approach is one of maximization, in which the interrogator uses scare tactics by exaggerating or falsifying the characterization of evidence, the seriousness of the offense, and the magnitude of the charges. In a recent study (Kassin & McNall, 1991), subjects read interrogation transcripts in which these ploys were used and estimated the severity of the sentence likely to be received. The results indicated that minimization communicated an implicit offer of leniency, comparable to that estimated in an explicit-promise condition, whereas maximization implied a threat of harsh punishment, comparable to that found in an explicit-threat condition. Yet although American courts routinely exclude confessions elicited by explicit threats and promises, they admit those produced by contingencies that are pragmatically implied. Although police often use coercive methods of interrogation, research suggests that juries are prone to convict defendants who confess in these situations. In the case of Arizona v. Fulminante (1991), the U.S. Supreme Court ruled that under certain conditions, an improperly admitted coerced confession may be considered upon appeal to have been nonprejudicial, or \"harmless error.\" Yet mock-jury research shows that people find it hard to believe that anyone would confess to a crime that he or she did not commit (Kassin & Wrightsman, 1980, 1981; Sukel & Kassin, 1994). Still, it happens. One cannot estimate the prevalence of the problem, which has never been systematically examined, but there are numerous documented instances on record (Bedau & Radelet, 1987; Borchard, 1932; Rattner, 1988). Indeed, one can distinguish three types of false confession (Kassin & Wrightsman, 1985): voluntary (in which a subject confesses in the absence of external pressure), coerced-compliant (in which a suspect confesses only to escape an aversive interrogation, secure a promised benefit, or avoid a threatened harm), and coerced-internalized (in which a suspect actually comes to believe that he or she is guilty of the crime). This last type of false confession seems most unlikely, but a number of recent cases have come to light in which the police had seized a suspect who was vulnerable (by virtue of his or her youth, intelligence, personality, stress, or mental state) and used false evidence to convince the beleaguered suspect that he or she was guilty. In one case that received a great deal of attention, for example, Paul Ingram was charged with rape and a host of Satanic cult crimes that included the slaughter of newborn babies. During 6 months of interrogation, he was hypnotized",
"title": ""
},
{
"docid": "6103a365705a6083e40bb0ca27f6ca78",
"text": "Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand. The author reviews evidence of such a bias in a variety of guises and gives examples of its operation in several practical contexts. Possible explanations are considered, and the question of its utility or disutility is discussed.",
"title": ""
}
] | [
{
"docid": "df2b7382996a5bedb592b26bc866fd19",
"text": "BACKGROUND/AIMS\nTo investigate the possible clinical risk factors contributing to PGS after subtotal gastrectomy.\n\n\nMETHODOLOGY\nThe clinical data of 422 patients administering subtotal gastrectomy in our hospital were reviewed retrospectively from Jan, 1, 2005 to May, 1, 2012.\n\n\nRESULTS\nThe higher morbility of PGS were found in the patients whose age were over 65 years, combining with anxiety disorder or diabetes mellitus, with low-albuminemia in perioperative period or having pyloric obstruction in preoperative period, administering Billroth II gastroenterostomy, whose operation time over 4 hours, using patient-controlled analgesia, injecting liquid per day over 3500 ml.\n\n\nCONCLUSION\nThe clinical factors referred previously maybe the identified risk factors of PGS after subtotal gastrectomy, avoiding these clinical factors in perioperative period would reduce the occurrences of PGS after subtotal gastrectomy.",
"title": ""
},
{
"docid": "1836291f68e18f8975803f6acbb302be",
"text": "We review key challenges of developing spoken dialog systems that can engage in interactions with one or multiple participants in relatively unconstrained environments. We outline a set of core competencies for open-world dialog, and describe three prototype systems. The systems are built on a common underlying conversational framework which integrates an array of predictive models and component technologies, including speech recognition, head and pose tracking, probabilistic models for scene analysis, multiparty engagement and turn taking, and inferences about user goals and activities. We discuss the current models and showcase their function by means of a sample recorded interaction, and we review results from an observational study of open-world, multiparty dialog in the wild.",
"title": ""
},
{
"docid": "1945d4663a49a5e1249e43dc7f64d15b",
"text": "The current generation of adolescents grows up in a media-saturated world. However, it is unclear how media influences the maturational trajectories of brain regions involved in social interactions. Here we review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use. We argue that adolescents are highly sensitive to acceptance and rejection through social media, and that their heightened emotional sensitivity and protracted development of reflective processing and cognitive control may make them specifically reactive to emotion-arousing media. This review illustrates how neuroscience may help understand the mutual influence of media and peers on adolescents’ well-being and opinion formation. The current generation of adolescents grows up in a media-saturated world. Here, Crone and Konijn review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use.",
"title": ""
},
{
"docid": "8b43d399ec64a1d89a62a744720f453e",
"text": "Object tracking is one of the key components of the perception system of autonomous cars and ADASs. With tracking, an ego-vehicle can make a prediction about the location of surrounding objects in the next time epoch and plan for next actions. Object tracking algorithms typically rely on sensory data (from RGB cameras or LIDAR). In fact, the integration of 2D-RGB camera images and 3D-LIDAR data can provide some distinct benefits. This paper proposes a 3D object tracking algorithm using a 3D-LIDAR, an RGB camera and INS (GPS/IMU) sensors data by analyzing sequential 2D-RGB, 3D point-cloud, and the ego-vehicle's localization data and outputs the trajectory of the tracked object, an estimation of its current velocity, and its predicted location in the 3D world coordinate system in the next time-step. Tracking starts with a known initial 3D bounding box for the object. Two parallel mean-shift algorithms are applied for object detection and localization in the 2D image and 3D point-cloud, followed by a robust 2D/3D Kalman filter based fusion and tracking. Reported results, from both quantitative and qualitative experiments using the KITTI database demonstrate the applicability and efficiency of the proposed approach in driving environments.",
"title": ""
},
{
"docid": "8c3a76aa28177f64e72c52df5ff4a679",
"text": "Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards lowor high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both lowand highorder feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its “wide” and “deep” parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.",
"title": ""
},
{
"docid": "28415a26b69057231f1cd063e3dbed40",
"text": "OBJECTIVE\nTo determine if ovariectomy (OVE) is a safe alternative to ovariohysterectomy (OVH) for canine gonadectomy.\n\n\nSTUDY DESIGN\nLiterature review.\n\n\nMETHODS\nAn on-line bibliographic search in MEDLINE and PubMed was performed in December 2004, covering the period 1969-2004. Relevant studies were compared and evaluated with regard to study design, surgical technique, and both short-term and long-term follow-up.\n\n\nCONCLUSIONS\nOVH is technically more complicated, time consuming, and is probably associated with greater morbidity (larger incision, more intraoperative trauma, increased discomfort) compared with OVE. No significant differences between techniques were observed for incidence of long-term urogenital problems, including endometritis/pyometra and urinary incontinence, making OVE the preferred method of gonadectomy in the healthy bitch.\n\n\nCLINICAL RELEVANCE\nCanine OVE can replace OVH as the procedure of choice for routine neutering of healthy female dogs.",
"title": ""
},
{
"docid": "9af350b2d1e5b00df37ab8bd5b8f1f0b",
"text": "Memory access latency has significant impact on application performance. Unfortunately, the random access latency of DRAM has been scaling relatively slowly, and often directly affects the critical path of execution, especially for applications with insufficient locality or memory-level parallelism. The existing low-latency DRAM organizations either incur significant area overhead or burden the software stack with non-uniform access latency. This paper proposes SALAD, a new DRAM device architecture that provides symmetric access l atency with asymmetric DRAM bank organizations. Since local banks have lower data transfer time due to their proximity to the I/O pads, SALAD applies high aspect-ratio (i.e., low-latency) mats only to remote banks to offset the difference in data transfer time, thus providing uniformly low access time (tAC) over the whole device. Our evaluation demonstrates that SALAD improves the IPC by 13 percent (10 percent) without any software modifications, while incurring only 6 percent (3 percent) area overhead.",
"title": ""
},
{
"docid": "b36341d38ca1484fb1ebb15f1836fa3b",
"text": "This paper addresses the important problem of discerning hateful content in social media. We propose a detection scheme that is an ensemble of Recurrent Neural Network (RNN) classifiers, and it incorporates various features associated with user-related information, such as the users’ tendency towards racism or sexism. This data is fed as input to the above classifiers along with the word frequency vectors derived from the textual content. We evaluate our approach on a publicly available corpus of 16k tweets, and the results demonstrate its effectiveness in comparison to existing state-of-the-art solutions. More specifically, our scheme can successfully distinguish racism and sexism messages from normal text, and achieve higher classification quality than current state-of-the-art algorithms.",
"title": ""
},
{
"docid": "72138b8acfb7c9e11cfd92c0b78a737c",
"text": "We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "a786837b12c07039d4eca34c02e5c7d6",
"text": "The wafer level package (WLP) is a cost-effective solution for electronic package, and it has been increasingly applied during recent years. In this study, a new packaging technology which retains the advantages of WLP, the panel level package (PLP) technology, is proposed to further obtain the capability of signals fan-out for the fine-pitched integrated circuit (IC). In the PLP, the filler material is selected to fill the trench around the chip and provide a smooth surface for the redistribution lines. Therefore, the solder bumps could be located on both the filler and the chip surface, and the pitch of the chip side is fanned-out. In our previous research, it was found that the lifetime of solder joints in PLP can easily pass 3,500 cycles. The outstanding performance is explained by the application of a soft filler and a lamination material. However, it is also learned that the deformation of the lamination material during thermal loading may affect the reliability of the adjacent metal trace. In this study, the material effects of the proposed PLP technology are investigated and discussed through finite element analysis (FEA). A factorial analysis with three levels and three factors (the chip carrier, the lamination, and the filler material) is performed to obtain sensitivity information. Based on the results, the suggested combinations of packaging material in the PLP are provided. The reliability of the metal trace can be effectively improved by means of wisely applying materials in the PLP, and therefore, the PLP technology is expected to have a high potential for various applications in the near future.",
"title": ""
},
{
"docid": "c5c205c8a1fdd6f6def3e28b6477ecec",
"text": "The growth and popularity of Internet applications has reinforced the need for effective information filtering techniques. The collaborative filtering approach is now a popular choice and has been implemented in many on-line systems. While many researchers have proposed and compared the performance of various collaborative filtering algorithms, one important performance measure has been omitted from the research to date that is the robustness of the algorithm. In essence, robustness measures the power of the algorithm to make good predictions in the presence of noisy data. In this paper, we argue that robustness is an important system characteristic, and that it must be considered from the point-of-view of potential attacks that could be made on a system by malicious users. We propose a definition for system robustness, and identify system characteristics that influence robustness. Several attack strategies are described in detail, and experimental results are presented for the scenarios outlined.",
"title": ""
},
{
"docid": "57d3505a655e9c0efdc32101fd09b192",
"text": "POX is a Python based open source OpenFlow/Software Defined Networking (SDN) Controller. POX is used for faster development and prototyping of new network applications. POX controller comes pre installed with the mininet virtual machine. Using POX controller you can turn dumb openflow devices into hub, switch, load balancer, firewall devices. The POX controller allows easy way to run OpenFlow/SDN experiments. POX can be passed different parameters according to real or experimental topologies, thus allowing you to run experiments on real hardware, testbeds or in mininet emulator. In this paper, first section will contain introduction about POX, OpenFlow and SDN, then discussion about relationship between POX and Mininet. Final Sections will be regarding creating and verifying behavior of network applications in POX.",
"title": ""
},
{
"docid": "7d03c3e0e20b825809bebb5b2da1baed",
"text": "Flexoelectricity and the concomitant emergence of electromechanical size-effects at the nanoscale have been recently exploited to propose tantalizing concepts such as the creation of “apparently piezoelectric” materials without piezoelectric materials, e.g. graphene, emergence of “giant” piezoelectricity at the nanoscale, enhanced energy harvesting, among others. The aforementioned developments pertain primarily to hard ceramic crystals. In this work, we develop a nonlinear theoretical framework for flexoelectricity in soft materials. Using the concept of soft electret materials, we illustrate an interesting nonlinear interplay between the so-called Maxwell stress effect and flexoelectricity, and propose the design of a novel class of apparently piezoelectric materials whose constituents are intrinsically non-piezoelectric. In particular, we show that the electret-Maxwell stress based mechanism can be combined with flexoelectricity to achieve unprecedentedly high values of electromechanical coupling. Flexoelectricity is also important for a special class of soft materials: biological membranes. In this context, flexoelectricity manifests itself as the development of polarization upon changes in curvature. Flexoelectricity is found to be important in a number of biological functions including hearing, ion transport and in some situations where mechanotransduction is necessary. In this work, we present a simple linearized theory of flexoelectricity in biological membranes and some illustrative examples. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "438093b14f983499ada7ce392ba27664",
"text": "The spline under tension was introduced by Schweikert in an attempt to imitate cubic splines but avoid the spurious critical points they induce. The defining equations are presented here, together with an efficient method for determining the necessary parameters and computing the resultant spline. The standard scalar-valued curve fitting problem is discussed, as well as the fitting of open and closed curves in the plane. The use of these curves and the importance of the tension in the fitting of contour lines are mentioned as application.",
"title": ""
},
{
"docid": "e5ecbd3728e93badd4cfbf5eef6957f9",
"text": "Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.",
"title": ""
},
{
"docid": "1b5a5a9c08cb3054d1201dae0d1aca95",
"text": "The exponential increase of the traffic volume makes Distributed Denial-of-Service (DDoS) attacks a top security threat to service providers. Existing DDoS defense mechanisms lack resources and flexibility to cope with attacks by themselves, and by utilizing other’s companies resources, the burden of the mitigation can be shared. Technologies as blockchain and smart contracts allow distributing attack information across multiple domains, while SDN (Software-Defined Networking) and NFV (Network Function Virtualization) enables to scale defense capabilities on demand for a single network domain. This proposal presents the design of a novel architecture combining these elements and introducing novel opportunities for flexible and efficient DDoS mitigation solutions across multiple domains.",
"title": ""
},
{
"docid": "924146534d348e7a44970b1d78c97e9c",
"text": "Little is known of the extent to which heterosexual couples are satisfied with their current frequency of sex and the degree to which this predicts overall sexual and relationship satisfaction. A population-based survey of 4,290 men and 4,366 women was conducted among Australians aged 16 to 64 years from a range of sociodemographic backgrounds, of whom 3,240 men and 3,304 women were in regular heterosexual relationships. Only 46% of men and 58% of women were satisfied with their current frequency of sex. Dissatisfied men were overwhelmingly likely to desire sex more frequently; among dissatisfied women, only two thirds wanted sex more frequently. Age was a significant factor but only for men, with those aged 35-44 years tending to be least satisfied. Men and women who were dissatisfied with their frequency of sex were also more likely to express overall lower sexual and relationship satisfaction. The authors' findings not only highlight desired frequency of sex as a major factor in satisfaction, but also reveal important gender and other sociodemographic differences that need to be taken into account by researchers and therapists seeking to understand and improve sexual and relationship satisfaction among heterosexual couples. Other issues such as length of time spent having sex and practices engaged in may also be relevant, particularly for women.",
"title": ""
},
{
"docid": "6347b642cec08bf062f6e5594f805bd3",
"text": "Using a multimethod approach, the authors conducted 4 studies to test life span hypotheses about goal orientations across adulthood. Confirming expectations, in Studies 1 and 2 younger adults reported a primary growth orientation in their goals, whereas older adults reported a stronger orientation toward maintenance and loss prevention. Orientation toward prevention of loss correlated negatively with well-being in younger adults. In older adults, orientation toward maintenance was positively associated with well-being. Studies 3 and 4 extend findings of a self-reported shift in goal orientation to the level of behavioral choice involving cognitive and physical fitness goals. Studies 3 and 4 also examine the role of expected resource demands. The shift in goal orientation is discussed as an adaptive mechanism to manage changing opportunities and constraints across adulthood.",
"title": ""
},
{
"docid": "7593c8e9eb1520f65d7780efbbcedd7d",
"text": "We show how to achieve better illumination estimates for color constancy by combining the results of several existing algorithms. We consider committee methods based on both linear and non–linear ways of combining the illumination estimates from the original set of color constancy algorithms. Committees of grayworld, white patch and neural net methods are tested. The committee results are always more accurate than the estimates of any of the other algorithms taken in isolation.",
"title": ""
},
{
"docid": "eebcb9e0e2f08d91174b8476e580e8b6",
"text": "Plants are recognized in the pharmaceutical industry for their broad structural diversity as well as their wide range of pharmacological activities. The biologically active compounds present in plants are called phytochemicals. These phytochemicals are derived from various parts of plants such as leaves, flowers, seeds, barks, roots and pulps. These phytochemicals are used as sources of direct medicinal agents. They serve as a raw material base for elaboration of more complex semi-synthetic chemical compounds. This paper mainly deals with the collection of plants, the extraction of active compounds from the various parts of plants, qualitative and quantitative analysis of the phytochemicals.",
"title": ""
}
] | scidocsrr |
8ad9b655796db2d971c252034babffb7 | Table Detection in Noisy Off-line Handwritten Documents | [
{
"docid": "823c0e181286d917a610f90d1c9db0c3",
"text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables, and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies the decisions that are made by a table recognizer, and the assumptions and inferencing techniques that underlie these decisions.",
"title": ""
},
{
"docid": "0343f1a0be08ff53e148ef2eb22aaf14",
"text": "Tables are a ubiquitous form of communication. While everyone seems to know what a table is, a precise, analytical definition of “tabularity” remains elusive because some bureaucratic forms, multicolumn text layouts, and schematic drawings share many characteristics of tables. There are significant differences between typeset tables, electronic files designed for display of tables, and tables in symbolic form intended for information retrieval. Most past research has addressed the extraction of low-level geometric information from raster images of tables scanned from printed documents, although there is growing interest in the processing of tables in electronic form as well. Recent research on table composition and table analysis has improved our understanding of the distinction between the logical and physical structures of tables, and has led to improved formalisms for modeling tables. This review, which is structured in terms of generalized paradigms for table processing, indicates that progress on half-a-dozen specific research issues would open the door to using existing paper and electronic tables for database update, tabular browsing, structured information retrieval through graphical and audio interfaces, multimedia table editing, and platform-independent display.",
"title": ""
}
] | [
{
"docid": "b215d3604e19c7023049c082b10d7aac",
"text": "In this paper, we discuss how we can extend probabilistic topic models to analyze the relationship graph of popular social-network data, so that we can group or label the edges and nodes in the graph based on their topic similarity. In particular, we first apply the well-known Latent Dirichlet Allocation (LDA) model and its existing variants to the graph-labeling task and argue that the existing models do not handle popular nodes (nodes with many incoming edges) in the graph very well. We then propose possible extensions to this model to deal with popular nodes. Our experiments show that the proposed extensions are very effective in labeling popular nodes, showing significant improvements over the existing methods. Our proposed methods can be used for providing, for instance, more relevant friend recommendations within a social network.",
"title": ""
},
{
"docid": "20cfcfde25db033db8d54fe7ae6fcca1",
"text": "We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is stateof-the-art on a corpus of literary texts.",
"title": ""
},
{
"docid": "e2c239bed763d13117e943ef988827f1",
"text": "This paper presents a comprehensive review of 196 studies which employ operational research (O.R.) and artificial intelligence (A.I.) techniques in the assessment of bank performance. Several key issues in the literature are highlighted. The paper also points to a number of directions for future research. We first discuss numerous applications of data envelopment analysis which is the most widely applied O.R. technique in the field. Then we discuss applications of other techniques such as neural networks, support vector machines, and multicriteria decision aid that have also been used in recent years, in bank failure prediction studies and the assessment of bank creditworthiness and underperformance. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8b34b86cb1ce892a496740bfbff0f9c5",
"text": "Common subexpression elimination is commonly employed to reduce the number of operations in DSP algorithms after decomposing constant multiplications into shifts and additions. Conventional optimization techniques for finding common subexpressions can optimize constant multiplications with only a single variable at a time, and hence cannot fully optimize the computations with multiple variables found in matrix form of linear systems like DCT, DFT etc. We transform these computations such that all common subexpressions involving any number of variables can be detected. We then present heuristic algorithms to select the best set of common subexpressions. Experimental results show the superiority of our technique over conventional techniques for common subexpression elimination.",
"title": ""
},
{
"docid": "c61efe1758f6599e5cc069185bb02d48",
"text": "Modeling the face aging process is a challenging task due to large and non-linear variations present in different stages of face development. This paper presents a deep model approach for face age progression that can efficiently capture the non-linear aging process and automatically synthesize a series of age-progressed faces in various age ranges. In this approach, we first decompose the long-term age progress into a sequence of short-term changes and model it as a face sequence. The Temporal Deep Restricted Boltzmann Machines based age progression model together with the prototype faces are then constructed to learn the aging transformation between faces in the sequence. In addition, to enhance the wrinkles of faces in the later age ranges, the wrinkle models are further constructed using Restricted Boltzmann Machines to capture their variations in different facial regions. The geometry constraints are also taken into account in the last step for more consistent age-progressed results. The proposed approach is evaluated using various face aging databases, i.e. FGNET, Cross-Age Celebrity Dataset (CACD) and MORPH, and our collected large-scale aging database named AginG Faces in the Wild (AGFW). In addition, when ground-truth age is not available for input image, our proposed system is able to automatically estimate the age of the input face before aging process is employed.",
"title": ""
},
{
"docid": "f3727bfc3965bcb49d8897f144ac13a3",
"text": "Presenteeism refers to attending work while ill. Although it is a subject of intense interest to scholars in occupational medicine, relatively few organizational scholars are familiar with the concept. This article traces the development of interest in presenteeism, considers its various conceptualizations, and explains how presenteeism is typically measured. Organizational and occupational correlates of attending work when ill are reviewed, as are medical correlates of resulting productivity loss. It is argued that presenteeism has important implications for organizational theory and practice, and a research agenda for organizational scholars is presented. Copyright # 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "b14010454fe4b9f9712c13cbf9a5e23b",
"text": "In this paper we propose an approach to Part of Speech (PoS) tagging using a combination of Hidden Markov Model and error driven learning. For the NLPAI joint task, we also implement a chunker using Conditional Random Fields (CRFs). The results for the PoS tagging and chunking task are separately reported along with the results of the joint task.",
"title": ""
},
{
"docid": "8bf0d60fcda4aea9f905b8df6ddc5d65",
"text": "We present kinematics, actuation, detailed design, characterization results and initial user evaluations of AssistOn-Knee, a novel self-aligning active exoskeleton for robot-assisted knee rehabilitation. AssistOn-Knee can, not only assist flexion/extension movements of the knee joint but also accommodate its translational movements in the sagittal plane. Automatically aligning its joint axes, AssistOn-Knee enables an ideal match between human knee axis and the exoskeleton axis, guaranteeing ergonomy and comfort throughout the therapy. Self-aligning feature significantly shortens the setup time required to attach the patient to the exoskeleton, allowing more effective time spent on exercises. The proposed exoskeleton actively controls the rotational degree of freedom of the knee through a Bowden cable-driven series elastic actuator, while the translational movements of the knee joints are passively accommodated through use of a 3 degrees of freedom planar parallel mechanism. AssistOn-Knee possesses a lightweight and compact design with significantly low apparent inertia, thanks to its Bowden cable based transmission that allows remote location of the actuator and reduction unit. Furthermore, thanks to its series-elastic actuation, AssistOn-Knee enables high-fidelity force control and active backdrive-ability within its control bandwidth, while featuring passive elasticity for excitations above this bandwidth, ensuring safety and robustness throughout the whole frequency spectrum.",
"title": ""
},
{
"docid": "177f95dc300186f519bd3ac48081a6e0",
"text": "TAI's multi-sensor fusion technology is accelerating the development of accurate MEMS sensor-based inertial navigation in situations where GPS does not operate reliably (GPS-denied environments). TAI has demonstrated that one inertial device per axis is not sufficient to produce low drift errors for long term accuracy needed for GPS-denied applications. TAI's technology uses arrays of off-the-shelf MEMS inertial sensors to create an inertial measurement unit (IMU) suitable for inertial navigation systems (INS) that require only occasional GPS updates. Compared to fiber optics gyros, properly combined MEMS gyro arrays are lower cost, fit into smaller volume, use less power and have equal or better performance. The patents TAI holds address this development for both gyro and accelerometer arrays. Existing inertial measurement units based on such array combinations, the backbone of TAI's inertial navigation system (INS) design, have demonstrated approximately 100 times lower sensor drift error to support very accurate angular rates, very accurate position measurements, and very low angle error for long durations. TAI's newest, fourth generation, product occupies small volume, has low weight, and consumes little power. The complete assembly can be potted in a protective sheath to form a rugged standalone product. An external exoskeleton case protects the electronic assembly for munitions and UAV applications. TAI's IMU/INS will provide the user with accurate real-time navigation information in difficult situations where GPS is not reliable. The key to such accurate performance is to achieve low sensor drift errors. The INS responds to quick movements without introducing delays while sharply reducing sensor drift errors that result in significant navigation errors. Discussed in the paper are physical characteristics of the IMU, an overview of the system design, TAI's systematic approach to drift reduction and some early results of applying a sigma point Kalman filter to sustain low gyro drift.",
"title": ""
},
{
"docid": "ba2a9451fa1f794c7a819acaa9bc5d82",
"text": "In this paper we briefly address DLR’s (German Aerospace Center) background in space robotics by hand of corresponding milestone projects including systems on the International Space Station. We then discuss the key technologies needed for the development of an artificial “robonaut” generation with mechatronic ultra-lightweight arms and multifingered hands. The third arm generation is nearly finished now, approaching the limits of what is technologically achievable today with respect to light weight and power losses. In a similar way DLR’s second generation of artificial four-fingered hands was a big step towards higher reliability, manipulability and overall",
"title": ""
},
{
"docid": "e95336e305ac921c01198554da91dcdb",
"text": "We consider the problem of staffing call-centers with multip le customer classes and agent types operating under quality-of-service (QoS) constraints and demand rate uncertainty. We introduce a formulation of the staffing problem that requires that the Q oS constraints are met with high probability with respect to the uncertainty in the demand ra te. We contrast this chance-constrained formulation with the average-performance constraints tha t have been used so far in the literature. We then propose a two-step solution for the staffing problem u nder chance constraints. In the first step, we introduce a Random Static Planning Problem (RSPP) a nd discuss how it can be solved using two different methods. The RSPP provides us with a first -order (or fluid) approximation for the true optimal staffing levels and a staffing frontier. In the second step, we solve a finite number of staffing problems with known arrival rates–the arrival rate s on the optimal staffing frontier. Hence, our formulation and solution approach has the important pro perty that it translates the problem with uncertain demand rates to one with known arrival rates. The o utput of our procedure is a solution that is feasible with respect to the chance constraint and ne arly optimal for large call centers.",
"title": ""
},
{
"docid": "15dfa65d40eb6cd60c3df952a7b864c4",
"text": "The lack of theoretical progress in the IS field may be surprising. From an empirical viewpoint, the IS field resembles other management fields. Specifically, as fields of inquiry develop, their theories are often placed on a hierarchy from ad hoc classification systems (in which categories are used to summarize empirical observations), to taxonomies (in which the relationships between the categories can be described), to conceptual frameworks (in which propositions summarize explanations and predictions), to theoretical systems (in which laws are contained within axiomatic or formal theories) (Parsons and Shils 1962). In its short history, IS research has developed from classification systems to conceptual frameworks. In the 1970s, it was considered pre-paradigmatic. Today, it is approaching the level of development in empirical research of other management fields, like organizational behavior (Webster 2001). However, unlike other fields that have journals devoted to review articles (e.g., the Academy of Management Review), we see few review articles in ISand hence the creation of MISQ Review as a device for accelerating development of the discipline.",
"title": ""
},
{
"docid": "9db83d9bb1acfa49e7546a8976893180",
"text": "Private query processing on encrypted databases allows users to obtain data from encrypted databases in such a way that the user’s sensitive data will be protected from exposure. Given an encrypted database, the users typically submit queries similar to the following examples: – How many employees in an organization make over $100,000? – What is the average age of factory workers suffering from leukemia? Answering the above questions requires one to search and then compute over the encrypted databases in sequence. In the case of privately processing queries with only one of these operations, many efficient solutions have been developed using a special-purpose encryption scheme (e.g., searchable encryption). In this paper, we are interested in efficiently processing queries that need to perform both operations on fully encrypted databases. One immediate solution is to use several special-purpose encryption schemes at the same time, but this approach is associated with a high computational cost for maintaining multiple encryption contexts. The other solution is to use a privacy homomorphism (or fully homomorphic encryption) scheme. However, no secure solutions have been developed that meet the efficiency requirements. In this work, we construct a unified framework so as to efficiently and privately process queries with “search” and “compute” operations. To this end, the first part of our work involves devising some underlying circuits as primitives for queries on encrypted data. Second, we apply two optimization techniques to improve the efficiency of the circuit primitives. One technique is to exploit SIMD techniques to accelerate their basic operations. In contrast to general SIMD approaches, our SIMD implementation can be applied even when one basic operation is executed. The other technique is to take a large integer ring (e.g., Z2t) as a message space instead of a binary field. Even for an integer of k bits with k ą t, addition can be performed with degree 1 circuits with lazy carry operations. As a result, search queries including a conjunctive or disjunctive query on encrypted databases of N tuples with μ-bit attributes require OpN logμq homomorphic operations with depth Oplogμq circuits. Search-and-compute queries, such as a conjunctive query with aggregate functions in the same conditions, are processed using OpμNq homomorphic operations with at most depth Oplogμ logNq circuits. Further, we can process search-and-compute queries using only OpN logμq homomorphic operations with depth Oplogμq circuits, even in the large domain. Finally, we present various experiments by varying the parameters, such as the query type and the number of tuples.",
"title": ""
},
{
"docid": "7c5f1b12f540c8320587ead7ed863ee5",
"text": "This paper studies the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks. The randomly occurring controller gain fluctuation phenomenon is investigated for non-fragile strategy. Moreover, the mixed time-varying delays composed of discrete and distributed delays are considered. By employing stochastic stability theory, synchronization criteria are developed for the Markov jump neural networks. On the basis of the derived criteria, the non-fragile synchronization controller is designed. Finally, an illustrative example is presented to demonstrate the validity of the control approach.",
"title": ""
},
{
"docid": "1165be411612c7d6c09ec0408ffdeaad",
"text": "OBJECTIVES\nTo describe and compare 20 m shuttle run test (20mSRT) performance among children and youth across 50 countries; to explore broad socioeconomic indicators that correlate with 20mSRT performance in children and youth across countries and to evaluate the utility of the 20mSRT as an international population health indicator for children and youth.\n\n\nMETHODS\nA systematic review was undertaken to identify papers that explicitly reported descriptive 20mSRT (with 1-min stages) data on apparently healthy 9-17 year-olds. Descriptive data were standardised to running speed (km/h) at the last completed stage. Country-specific 20mSRT performance indices were calculated as population-weighted mean z-scores relative to all children of the same age and sex from all countries. Countries were categorised into developed and developing groups based on the Human Development Index, and a correlational analysis was performed to describe the association between country-specific performance indices and broad socioeconomic indicators using Spearman's rank correlation coefficient.\n\n\nRESULTS\nPerformance indices were calculated for 50 countries using collated data on 1 142 026 children and youth aged 9-17 years. The best performing countries were from Africa and Central-Northern Europe. Countries from South America were consistently among the worst performing countries. Country-specific income inequality (Gini index) was a strong negative correlate of the performance index across all 50 countries.\n\n\nCONCLUSIONS\nThe pattern of variability in the performance index broadly supports the theory of a physical activity transition and income inequality as the strongest structural determinant of health in children and youth. This simple and cost-effective assessment would be a powerful tool for international population health surveillance.",
"title": ""
},
{
"docid": "7539af35786fba888fa3a7cafa5db0b0",
"text": "Multi-view stereo algorithms typically rely on same-exposure images as inputs due to the brightness constancy assumption. While state-of-the-art depth results are excellent, they do not produce high-dynamic range textures required for high-quality view reconstruction. In this paper, we propose a technique that adapts multi-view stereo for different exposure inputs to simultaneously recover reliable dense depth and high dynamic range textures. In our technique, we use an exposure-invariant similarity statistic to establish correspondences, through which we robustly extract the camera radiometric response function and the image exposures. This enables us to then convert all images to radiance space and selectively use the radiance data for dense depth and high dynamic range texture recovery. We show results for synthetic and real scenes.",
"title": ""
},
{
"docid": "8b5d7965ac154da1266874027f0b10a0",
"text": "Matching pedestrians across disjoint camera views, known as person re-identification (re-id), is a challenging problem that is of importance to visual recognition and surveillance. Most existing methods exploit local regions within spatial manipulation to perform matching in local correspondence. However, they essentially extract fixed representations from pre-divided regions for each image and perform matching based on the extracted representation subsequently. For models in this pipeline, local finer patterns that are crucial to distinguish positive pairs from negative ones cannot be captured, and thus making them underperformed. In this paper, we propose a novel deep multiplicative integration gating function, which answers the question of what-and-where to match for effective person re-id. To address what to match, our deep network emphasizes common local patterns by learning joint representations in a multiplicative way. The network comprises two Convolutional Neural Networks (CNNs) to extract convolutional activations, and generates relevant descriptors for pedestrian matching. This thus, leads to flexible representations for pair-wise images. To address where to match, we combat the spatial misalignment by performing spatially recurrent pooling via a four-directional recurrent neural network to impose spatial depenEmail addresses: lin.wu@uq.edu.au (Lin Wu ), wangy@cse.unsw.edu.au (Yang Wang), xueli@itee.uq.edu.au (Xue Li), junbin.gao@sydney.edu.au (Junbin Gao) Preprint submitted to Elsevier 25·7·2017 ar X iv :1 70 7. 07 07 4v 1 [ cs .C V ] 2 1 Ju l 2 01 7 dency over all positions with respect to the entire image. The proposed network is designed to be end-to-end trainable to characterize local pairwise feature interactions in a spatially aligned manner. To demonstrate the superiority of our method, extensive experiments are conducted over three benchmark data sets: VIPeR, CUHK03 and Market-1501.",
"title": ""
},
{
"docid": "d6a585443f5829b556a1064b9b92113a",
"text": "The water quality monitoring system is designed for the need of environmental protection department in a particular area of the water quality requirements. The system is based on the Wireless Sensor Network (WSN). It consists of Wireless Water Quality Monitoring Network and Remote Data Center. The hardware platform use wireless microprocessor CC2430 as the core of the node. The sensor network is builted in accordance with Zigbee wireless transmission agreement. WSN Sample the water quality, and send the data to Internet with the help of the GPRS DTU which has a built-in TCP/IP protocol. Through the Internet, Remote Data Center gets the real-time water quality data, and then analysis, process and record the data. Environmental protection department can provide real-time guidance to those industry which depends on regional water quality conditions, like industrial, plant and aquaculture. The most important is that the work can be more efficient and less cost.",
"title": ""
},
{
"docid": "786a31d5c189c8376a08be6050ddbd9c",
"text": "In this article, we present a meta-analysis of research examining visibility of disability. In interrogating the issue of visibility and invisibility in the design of assistive technologies, we open a discussion about how perceptions surrounding disability can be probed through an examination of visibility and how these tensions do, and perhaps should, influence assistive technology design and research.",
"title": ""
},
{
"docid": "9a12ec03e4521a33a7e76c0c538b6b43",
"text": "Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.",
"title": ""
}
] | scidocsrr |
b5c64b6ec84258c7052997aaa8bd071f | DECISION BOUNDARY ANALYSIS OF ADVERSARIAL EXAMPLES | [
{
"docid": "8c0c2d5abd8b6e62f3184985e8e01d66",
"text": "Neural networks are known to be vulnerable to adversarial examples: inputs that are close to natural inputs but classified incorrectly. In order to better understand the space of adversarial examples, we survey ten recent proposals that are designed for detection and compare their efficacy. We show that all can be defeated by constructing new loss functions. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and the properties believed to be intrinsic to adversarial examples are in fact not. Finally, we propose several simple guidelines for evaluating future proposed defenses.",
"title": ""
},
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
{
"docid": "0a8c009d1bccbaa078f95cc601010af3",
"text": "Deep neural networks (DNNs) have transformed several artificial intelligence research areas including computer vision, speech recognition, and natural language processing. However, recent studies demonstrated that DNNs are vulnerable to adversarial manipulations at testing time. Specifically, suppose we have a testing example, whose label can be correctly predicted by a DNN classifier. An attacker can add a small carefully crafted noise to the testing example such that the DNN classifier predicts an incorrect label, where the crafted testing example is called adversarial example. Such attacks are called evasion attacks. Evasion attacks are one of the biggest challenges for deploying DNNs in safety and security critical applications such as self-driving cars.\n In this work, we develop new DNNs that are robust to state-of-the-art evasion attacks. Our key observation is that adversarial examples are close to the classification boundary. Therefore, we propose region-based classification to be robust to adversarial examples. Specifically, for a benign/adversarial testing example, we ensemble information in a hypercube centered at the example to predict its label. In contrast, traditional classifiers are point-based classification, i.e., given a testing example, the classifier predicts its label based on the testing example alone. Our evaluation results on MNIST and CIFAR-10 datasets demonstrate that our region-based classification can significantly mitigate evasion attacks without sacrificing classification accuracy on benign examples. Specifically, our region-based classification achieves the same classification accuracy on testing benign examples as point-based classification, but our region-based classification is significantly more robust than point-based classification to state-of-the-art evasion attacks.",
"title": ""
},
{
"docid": "b12bae586bc49a12cebf11cca49c0386",
"text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"title": ""
}
] | [
{
"docid": "7cc41229d0368f702a4dde3ccf597604",
"text": "State Machines",
"title": ""
},
{
"docid": "59f3c511765c52702b9047a688256532",
"text": "Mobile robots are dependent upon a model of the environment for many of their basic functions. Locally accurate maps are critical to collision avoidance, while large-scale maps (accurate both metrically and topologically) are necessary for efficient route planning. Solutions to these problems have immediate and important applications to autonomous vehicles, precision surveying, and domestic robots. Building accurate maps can be cast as an optimization problem: find the map that is most probable given the set of observations of the environment. However, the problem rapidly becomes difficult when dealing with large maps or large numbers of observations. Sensor noise and non-linearities make the problem even more difficult— especially when using inexpensive (and therefore preferable) sensors. This thesis describes an optimization algorithm that can rapidly estimate the maximum likelihood map given a set of observations. The algorithm, which iteratively reduces map error by considering a single observation at a time, scales well to large environments with many observations. The approach is particularly robust to noise and non-linearities, quickly escaping local minima that trap current methods. Both batch and online versions of the algorithm are described. In order to build a map, however, a robot must first be able to recognize places that it has previously seen. Limitations in sensor processing algorithms, coupled with environmental ambiguity, make this difficult. Incorrect place recognitions can rapidly lead to divergence of the map. This thesis describes a place recognition algorithm that can robustly handle ambiguous data. We evaluate these algorithms on a number of challenging datasets and provide quantitative comparisons to other state-of-the-art methods, illustrating the advantages of our methods.",
"title": ""
},
{
"docid": "2746d538694db54381639e5e5acdb4ca",
"text": "In the present research, the aqueous stability of leuprolide acetate (LA) in phosphate buffered saline (PBS) medium was studied (pH = 2.0-7.4). For this purpose, the effect of temperature, dissolved oxygen and pH on the stability of LA during 35 days was investigated. Results showed that the aqueous stability of LA was higher at low temperatures. Degassing of the PBS medium partially increased the stability of LA at 4 °C, while did not change at 37 °C. The degradation of LA was accelerated at lower pH values. In addition, complexes of LA with different portions of β-cyclodextrin (β-CD) were prepared through freeze-drying procedure and characterized by Fourier transform infrared (FTIR) and differential scanning calorimetry (DSC) analyses. Studying their aqueous stability at various pH values (2.0-7.4) showed LA/β-CD complexes exhibited higher stability when compared with LA at all pH values. The stability of complexes was also improved by increasing the portion of LA/β-CD up to 1/10.",
"title": ""
},
{
"docid": "27c56cabe2742fbe69154e63073e193e",
"text": "Developing a good model for oscillometric blood-pressure measurements is a hard task. This is mainly due to the fact that the systolic and diastolic pressures cannot be directly measured by noninvasive automatic oscillometric blood-pressure meters (NIBP) but need to be computed based on some kind of algorithm. This is in strong contrast with the classical Korotkoff method, where the diastolic and systolic blood pressures can be directly measured by a sphygmomanometer. Although an NIBP returns results similar to the Korotkoff method for patients with normal blood pressures, a big discrepancy exist between both methods for severe hyper- and hypotension. For these severe cases, a statistical model is needed to compensate or calibrate the oscillometric blood-pressure meters. Although different statistical models have been already studied, no immediate calibration method has been proposed. The reason is that the step from a model, describing the measurements, to a calibration, correcting the blood-pressure meters, is a rather large leap. In this paper, we study a “databased” Fourier series approach to model the oscillometric waveform and use the Windkessel model for the blood flow to correct the oscillometric blood-pressure meters. The method is validated on a measurement campaign consisting of healthy patients and patients suffering from either hyper- or hypotension.",
"title": ""
},
{
"docid": "c51e7c171de42ed19f69c6ccf893ec52",
"text": "The fibroblast growth factor signaling pathway (FGFR signaling) is an evolutionary conserved signaling cascade that regulates several basic biologic processes, including tissue development, angiogenesis, and tissue regeneration. Substantial evidence indicates that aberrant FGFR signaling is involved in the pathogenesis of cancer. Recent developments of deep sequencing technologies have allowed the discovery of frequent molecular alterations in components of FGFR signaling among several solid tumor types. Moreover, compelling preclinical models have demonstrated the oncogenic potential of these aberrations in driving tumor growth, promoting angiogenesis, and conferring resistance mechanisms to anticancer therapies. Recently, the field of FGFR targeting has exponentially progressed thanks to the development of novel agents inhibiting FGFs or FGFRs, which had manageable safety profiles in early-phase trials. Promising treatment efficacy has been observed in different types of malignancies, particularly in tumors harboring aberrant FGFR signaling, thus offering novel therapeutic opportunities in the era of precision medicine. The most exciting challenges now focus on selecting patients who are most likely to benefit from these agents, increasing the efficacy of therapies with the development of novel potent compounds and combination strategies, and overcoming toxicities associatedwith FGFR inhibitors. After examination of the basic and translational research studies that validated the oncogenic potential of aberrant FGFR signaling, this review focuses on recent data from clinical trials evaluating FGFR targeting therapies and discusses the challenges and perspectives for the development of these agents. Clin Cancer Res; 21(12); 2684–94. 2015 AACR. Disclosure of Potential Conflicts of Interest F. Andr e is a consultant/advisory board member for Novartis. J.-C. Soria is a consultant/advisory board member for AstraZeneca, Clovis Oncology, EOS, Johnson & Johnson, and Servier. No potential conflicts of interest were disclosed by the other authors. Editor's Disclosures The following editor(s) reported relevant financial relationships: S.E. Bates reports receiving a commercial research grant from Celgene",
"title": ""
},
{
"docid": "3bd2941e72695b3214247c8c7071410b",
"text": "The paper contributes to the emerging literature linking sustainability as a concept to problems researched in HRM literature. Sustainability is often equated with social responsibility. However, emphasizing mainly moral or ethical values neglects that sustainability can also be economically rational. This conceptual paper discusses how the notion of sustainability has developed and emerged in HRM literature. A typology of sustainability concepts in HRM is presented to advance theorizing in the field of Sustainable HRM. The concepts of paradox, duality, and dilemma are reviewed to contribute to understanding the emergence of sustainability in HRM. It is argued in this paper that sustainability can be applied as a concept to cope with the tensions of shortvs. long-term HRM and to make sense of paradoxes, dualities, and dilemmas. Furthermore, it is emphasized that the dualities cannot be reconciled when sustainability is interpreted in a way that leads to ignorance of one of the values or logics. Implications for further research and modest suggestions for managerial practice are derived.",
"title": ""
},
{
"docid": "863c806d29c15dd9b9160eae25316dfc",
"text": "This paper presents new structural statistical matrices which are gray level size zone matrix (SZM) texture descriptor variants. The SZM is based on the cooccurrences of size/intensity of each flat zone (connected pixels with the same gray level). The first improvement increases the information processed by merging multiple gray-level quantizations and reduces the required parameter numbers. New improved descriptors were especially designed for supervised cell texture classification. They are illustrated thanks to two different databases built from quantitative cell biology. The second alternative characterizes the DNA organization during the mitosis, according to zone intensities radial distribution. The third variant is a matrix structure generalization for the fibrous texture analysis, by changing the intensity/size pair into the length/orientation pair of each region.",
"title": ""
},
{
"docid": "e59f3f8e0deea8b4caa32b54049ad76b",
"text": "We present AD, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs, based on the alternating directions method of multipliers. Like other dual decomposition algorithms, AD has a modular architecture, where local subproblems are solved independently, and their solutions are gathered to compute a global update. The key characteristic of AD is that each local subproblem has a quadratic regularizer, leading to faster convergence, both theoretically and in practice. We provide closed-form solutions for these AD subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD applicable to a wide range of problems. Experiments on synthetic and real-world problems show that AD compares favorably with the state-of-the-art.",
"title": ""
},
{
"docid": "c4feca5e27cfecdd2913e18cc7b7a21a",
"text": "one component of intelligent transportation systems, IV systems use sensing and intelligent algorithms to understand the vehicle’s immediate environment, either assisting the driver or fully controlling the vehicle. Following the success of information-oriented systems, IV systems will likely be the “next wave” for ITS, functioning at the control layer to enable the driver–vehicle “subsystem” to operate more effectively. This column provides a broad overview of applications and selected activities in this field. IV application areas",
"title": ""
},
{
"docid": "610ec093f08d62548925918d6e64b923",
"text": "Word embeddings encode semantic meanings of words into low-dimension word vectors. In most word embeddings, one cannot interpret the meanings of specific dimensions of those word vectors. Nonnegative matrix factorization (NMF) has been proposed to learn interpretable word embeddings via non-negative constraints. However, NMF methods suffer from scale and memory issue because they have to maintain a global matrix for learning. To alleviate this challenge, we propose online learning of interpretable word embeddings from streaming text data. Experiments show that our model consistently outperforms the state-of-the-art word embedding methods in both representation ability and interpretability. The source code of this paper can be obtained from http: //github.com/skTim/OIWE.",
"title": ""
},
{
"docid": "669b4b1574c22a0c18dd1dc107bc54a1",
"text": "T lymphocytes respond to foreign antigens both by producing protein effector molecules known as lymphokines and by multiplying. Complete activation requires two signaling events, one through the antigen-specific receptor and one through the receptor for a costimulatory molecule. In the absence of the latter signal, the T cell makes only a partial response and, more importantly, enters an unresponsive state known as clonal anergy in which the T cell is incapable of producing its own growth hormone, interleukin-2, on restimulation. Our current understanding at the molecular level of this modulatory process and its relevance to T cell tolerance are reviewed.",
"title": ""
},
{
"docid": "ae3a54128bb29272e5cb3552236b6f12",
"text": "Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. The 2D-based analysis is incapable of handing large pose variations. Although 3D modeling techniques have been extensively used for 3D face recognition and 3D face animation, barely any research on 3D facial expression recognition using 3D range data has been reported. A primary factor for preventing such research is the lack of a publicly available 3D facial expression database. In this paper, we present a newly developed 3D facial expression database, which includes both prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. This is the first attempt at making a 3D facial expression database available for the research community, with the ultimate goal of fostering the research on affective computing and increasing the general understanding of facial behavior and the fine 3D structure inherent in human facial expressions. The new database can be a valuable resource for algorithm assessment, comparison and evaluation",
"title": ""
},
{
"docid": "a1a4b028fba02904333140e6791709bb",
"text": "Cross-site scripting (also referred to as XSS) is a vulnerability that allows an attacker to send malicious code (usually in the form of JavaScript) to another user. XSS is one of the top 10 vulnerabilities on Web application. While a traditional cross-site scripting vulnerability exploits server-side codes, DOM-based XSS is a type of vulnerability which affects the script code being executed in the clients browser. DOM-based XSS vulnerabilities are much harder to be detected than classic XSS vulnerabilities because they reside on the script codes from Web sites. An automated scanner needs to be able to execute the script code without errors and to monitor the execution of this code to detect such vulnerabilities. In this paper, we introduce a distributed scanning tool for crawling modern Web applications on a large scale and detecting, validating DOMbased XSS vulnerabilities. Very few Web vulnerability scanners can really accomplish this.",
"title": ""
},
{
"docid": "8e648261dc529f8e28ce3b2a40d9f0b0",
"text": "C 34 35 36 37 38 39 40 41 42 43 44 Article history: Received 21 July 2006 Received in revised form 25 June 2007 Accepted 27 July 2007 Available online xxxx",
"title": ""
},
{
"docid": "1f972cc136f47288888657e84464412e",
"text": "This paper evaluates the impact of machine translation on the software localization process and the daily work of professional translators when SMT is applied to low-resourced languages with rich morphology. Translation from English into six low-resourced languages (Czech, Estonian, Hungarian, Latvian, Lithuanian and Polish) from different language groups are examined. Quality, usability and applicability of SMT for professional translation were evaluated. The building of domain and project tailored SMT systems for localization purposes was evaluated in two setups. The results of the first evaluation were used to improve SMT systems and MT platform. The second evaluation analysed a more complex situation considering tag translation and its effects on the translator’s productivity.",
"title": ""
},
{
"docid": "ffcd59b9cf48f61ad0278effa6c167dd",
"text": "The first of this two-part series on critical illness in pregnancy dealt with obstetric disorders. In Part II, medical conditions that commonly affect pregnant women or worsen during pregnancy are discussed. ARDS occurs more frequently in pregnancy. Strategies commonly used in nonpregnant patients, including permissive hypercapnia, limits for plateau pressure, and prone positioning, may not be acceptable, especially in late pregnancy. Genital tract infections unique to pregnancy include chorioamnionitis, group A streptococcal infection causing toxic shock syndrome, and polymicrobial infection with streptococci, staphylococci, and Clostridium perfringens causing necrotizing vulvitis or fasciitis. Pregnancy predisposes to VTE; D-dimer levels have low specificity in pregnancy. A ventilation-perfusion scan is preferred over CT pulmonary angiography in some situations to reduce radiation to the mother's breasts. Low-molecular-weight or unfractionated heparins form the mainstay of treatment; vitamin K antagonists, oral factor Xa inhibitors, and direct thrombin inhibitors are not recommended in pregnancy. The physiologic hyperdynamic circulation in pregnancy worsens many cardiovascular disorders. It increases risk of pulmonary edema or arrhythmias in mitral stenosis, heart failure in pulmonary hypertension or aortic stenosis, aortic dissection in Marfan syndrome, or valve thrombosis in mechanical heart valves. Common neurologic problems in pregnancy include seizures, altered mental status, visual symptoms, and strokes. Other common conditions discussed are aspiration of gastric contents, OSA, thyroid disorders, diabetic ketoacidosis, and cardiopulmonary arrest in pregnancy. Studies confined to pregnant women are available for only a few of these conditions. We have, therefore, reviewed pregnancy-specific adjustments in the management of these disorders.",
"title": ""
},
{
"docid": "f84011e3b4c8b1e80d4e79dee3ccad53",
"text": "What is the future of fashion? Tackling this question from a data-driven vision perspective, we propose to forecast visual style trends before they occur. We introduce the first approach to predict the future popularity of styles discovered from fashion images in an unsupervised manner. Using these styles as a basis, we train a forecasting model to represent their trends over time. The resulting model can hypothesize new mixtures of styles that will become popular in the future, discover style dynamics (trendy vs. classic), and name the key visual attributes that will dominate tomorrow’s fashion. We demonstrate our idea applied to three datasets encapsulating 80,000 fashion products sold across six years on Amazon. Results indicate that fashion forecasting benefits greatly from visual analysis, much more than textual or meta-data cues surrounding products.",
"title": ""
},
{
"docid": "0ee435f59529fa0e1c5a01d3488aa6ed",
"text": "The additivity of wavelet subband quantization distortions was investigated in an unmasked detection task and in masked detection and discrimination tasks. Contrast thresholds were measured for both simple targets (artifacts induced by uniform quantization of individual discrete wavelet transform subbands) and compound targets (artifacts induced by uniform quantization of pairs of discrete wavelet transform subbands) in the presence of no mask and eight different natural image maskers. The results were used to assess summation between wavelet subband quantization distortions on orientation and spatial-frequency dimensions. In the unmasked detection experiment, subthreshold quantization distortions pooled in a non-linear fashion and the amount of summation agreed with those of previous summation-atthreshold experiments (ß=2.43; relative sensitivity=1.33). In the masked detection and discrimination experiments, suprathreshold quantization distortions pooled in a linear fashion. Summation increased as the distortions became increasingly suprathreshold but quickly settled to near-linear values. Summation on the spatial-frequency dimension was greater than summation on the orientation dimension for all suprathreshold contrasts. A high degree of uncertainty imposed by the natural image maskers precludes quantifying an absolute measure of summation.",
"title": ""
},
{
"docid": "268ab0ae541eb2555a464af0e8ab58c5",
"text": "Melanocytes are melanin-producing cells found in skin, hair follicles, eyes, inner ear, bones, heart and brain of humans. They arise from pluripotent neural crest cells and differentiate in response to a complex network of interacting regulatory pathways. Melanins are pigment molecules that are endogenously synthesized by melanocytes. The light absorption of melanin in skin and hair leads to photoreceptor shielding, thermoregulation, photoprotection, camouflage and display coloring. Melanins are also powerful cation chelators and may act as free radical sinks. Melanin formation is a product of complex biochemical events that starts from amino acid tyrosine and its metabolite, dopa. The types and amounts of melanin produced by melanocytes are determined genetically and are influenced by a variety of extrinsic and intrinsic factors such as hormonal changes, inflammation, age and exposure to UV light. These stimuli affect the different pathways in melanogenesis. In this review we will discuss the regulatory mechanisms involved in melanogenesis and explain how intrinsic and extrinsic factors regulate melanin production. We will also explain the regulatory roles of different proteins involved in melanogenesis.",
"title": ""
},
{
"docid": "dd05688335b4240bbc40919870e30f39",
"text": "In this tool report, we present an overview of the Watson system, a Semantic Web search engine providing various functionalities not only to find and locate ontologies and semantic data online, but also to explore the content of these semantic documents. Beyond the simple facade of a search engine for the Semantic Web, we show that the availability of such a component brings new possibilities in terms of developing semantic applications that exploit the content of the Semantic Web. Indeed, Watson provides a set of APIs containing high level functions for finding, exploring and querying semantic data and ontologies that have been published online. Thanks to these APIs, new applications have emerged that connect activities such as ontology construction, matching, sense disambiguation and question answering to the Semantic Web, developed by our group and others. In addition, we also describe Watson as a unprecedented research platform for the study the Semantic Web, and of formalised knowledge in general.",
"title": ""
}
] | scidocsrr |
09ada2c726f12a28265f15a68d1a9f85 | Spatiotemporal social media analytics for abnormal event detection and examination using seasonal-trend decomposition | [
{
"docid": "d3d471b6b377d8958886a2f6c89d5061",
"text": "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.",
"title": ""
}
] | [
{
"docid": "b389cf1f4274b250039414101cf0cc98",
"text": "We present a framework for analyzing the structure of digital media streams. Though our methods work for video, text, and audio, we concentrate on detecting the structure of digital music files. In the first step, spectral data is used to construct a similarity matrix calculated from inter-frame spectral similarity. The digital audio can be robustly segmented by correlating a kernel along the diagonal of the similarity matrix. Once segmented, spectral statistics of each segment are computed. In the second step, segments are clustered based on the selfsimilarity of their statistics. This reveals the structure of the digital music in a set of segment boundaries and labels. Finally, the music can be summarized by selecting clusters with repeated segments throughout the piece. The summaries can be customized for various applications based on the structure of the original music.",
"title": ""
},
{
"docid": "425ee6d9de68116692d1e449f7be639b",
"text": "Copy-move forgery is one of the most common types of image forgeries, where a region from one part of an image is copied and pasted onto another part, thereby concealing the image content in the latter region. Keypoint based copy-move forgery detection approaches extract image feature points and use local visual features, rather than image blocks, to identify duplicated regions. Keypoint based approaches exhibit remarkable performance with respect to computational cost, memory requirement, and robustness. But unfortunately, they usually do not work well if smooth background areas are used to hide small objects, as image keypoints cannot be extracted effectively from those areas. It is a challenging work to design a keypoint-based method for detecting forgeries involving small smooth regions. In this paper, we propose a new keypoint-based copy-move forgery detection for small smooth regions. Firstly, the original tampered image is segmented into nonoverlapping and irregular superpixels, and the superpixels are classified into smooth, texture and strong texture based on local information entropy. Secondly, the stable image keypoints are extracted from each superpixel, including smooth, texture and strong texture ones, by utilizing the superpixel content based adaptive feature points detector. Thirdly, the local visual features, namely exponent moments magnitudes, are constructed for each image keypoint, and the best bin first and reversed generalized 2 nearest-neighbor algorithm are utilized to find rapidly the matching image keypoints. Finally, the falsely matched image keypoints are removed by customizing the random sample consensus, and the duplicated regions are localized by using zero mean normalized cross-correlation measure. Extensive experimental results show that the newly proposed scheme can achieve much better detection results for copy-move forgery images under various challenging conditions, such as geometric transforms, JPEG compression, and additive white Gaussian noise, compared with the existing state-of-the-art copy-move forgery detection methods.",
"title": ""
},
{
"docid": "5706ae68d5e2b56679e0c89361fcc8b8",
"text": "Quantum computers promise to exceed the computational efficiency of ordinary classical machines because quantum algorithms allow the execution of certain tasks in fewer steps. But practical implementation of these machines poses a formidable challenge. Here I present a scheme for implementing a quantum-mechanical computer. Information is encoded onto the nuclear spins of donor atoms in doped silicon electronic devices. Logical operations on individual spins are performed using externally applied electric fields, and spin measurements are made using currents of spin-polarized electrons. The realization of such a computer is dependent on future refinements of conventional silicon electronics.",
"title": ""
},
{
"docid": "5562bb6fdc8864a23e7ec7992c7bb023",
"text": "Bacteria are known to communicate primarily via secreted extracellular factors. Here we identify a previously uncharacterized type of bacterial communication mediated by nanotubes that bridge neighboring cells. Using Bacillus subtilis as a model organism, we visualized transfer of cytoplasmic fluorescent molecules between adjacent cells. Additionally, by coculturing strains harboring different antibiotic resistance genes, we demonstrated that molecular exchange enables cells to transiently acquire nonhereditary resistance. Furthermore, nonconjugative plasmids could be transferred from one cell to another, thereby conferring hereditary features to recipient cells. Electron microscopy revealed the existence of variously sized tubular extensions bridging neighboring cells, serving as a route for exchange of intracellular molecules. These nanotubes also formed in an interspecies manner, between B. subtilis and Staphylococcus aureus, and even between B. subtilis and the evolutionary distant bacterium Escherichia coli. We propose that nanotubes represent a major form of bacterial communication in nature, providing a network for exchange of cellular molecules within and between species.",
"title": ""
},
{
"docid": "2ae773f548c1727a53a7eb43550d8063",
"text": "Today's Internet hosts are threatened by large-scale distributed denial-of-service (DDoS) attacks. The path identification (Pi) DDoS defense scheme has recently been proposed as a deterministic packet marking scheme that allows a DDoS victim to filter out attack packets on a per packet basis with high accuracy after only a few attack packets are received (Yaar , 2003). In this paper, we propose the StackPi marking, a new packet marking scheme based on Pi, and new filtering mechanisms. The StackPi marking scheme consists of two new marking methods that substantially improve Pi's incremental deployment performance: Stack-based marking and write-ahead marking. Our scheme almost completely eliminates the effect of a few legacy routers on a path, and performs 2-4 times better than the original Pi scheme in a sparse deployment of Pi-enabled routers. For the filtering mechanism, we derive an optimal threshold strategy for filtering with the Pi marking. We also develop a new filter, the PiIP filter, which can be used to detect Internet protocol (IP) spoofing attacks with just a single attack packet. Finally, we discuss in detail StackPi's compatibility with IP fragmentation, applicability in an IPv6 environment, and several other important issues relating to potential deployment of StackPi",
"title": ""
},
{
"docid": "0dd9fc4317dc99a2ca55a822cfc5c36e",
"text": "Recently, research has shown that it is possible to spoof a variety of fingerprint scanners using some simple techniques with molds made from plastic, clay, Play-Doh, silicone or gelatin materials. To protect against spoofing, methods of liveness detection measure physiological signs of life from fingerprints ensuring only live fingers are captured for enrollment or authentication. In this paper, a new liveness detection method is proposed which is based on noise analysis along the valleys in the ridge-valley structure of fingerprint images. Unlike live fingers which have a clear ridge-valley structure, artificial fingers have a distinct noise distribution due to the material’s properties when placed on a fingerprint scanner. Statistical features are extracted in multiresolution scales using wavelet decomposition technique. Based on these features, liveness separation (live/non-live) is performed using classification trees and neural networks. We test this method on the dataset which contains about 58 live, 80 spoof (50 made from Play-Doh and 30 made from gelatin), and 25 cadaver subjects for 3 different scanners. Also, we test this method on a second dataset which contains 28 live and 28 spoof (made from silicone) subjects. Results show that we can get approximately 90.9-100% classification of spoof and live fingerprints. The proposed liveness detection method is purely software based and application of this method can provide anti-spoofing protection for fingerprint scanners.",
"title": ""
},
{
"docid": "d0bacaa267599486356c175ca5419ede",
"text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.",
"title": ""
},
{
"docid": "307dac4f0cc964a539160780abb1c123",
"text": "One of the main current applications of intelligent systems is recommender systems (RS). RS can help users to find relevant items in huge information spaces in a personalized way. Several techniques have been investigated for the development of RS. One of them is evolutionary computational (EC) techniques, which is an emerging trend with various application areas. The increasing interest in using EC for web personalization, information retrieval and RS fostered the publication of survey papers on the subject. However, these surveys have analyzed only a small number of publications, around ten. This study provides a comprehensive review of more than 65 research publications focusing on five aspects we consider relevant for such: the recommendation technique used, the datasets and the evaluation methods adopted in their experimental parts, the baselines employed in the experimental comparison of proposed approaches and the reproducibility of the reported experiments. At the end of this review, we discuss negative and positive aspects of these papers, as well as point out opportunities, challenges and possible future research directions. To the best of our knowledge, this review is the most comprehensive review of various approaches using EC in RS. Thus, we believe this review will be a relevant material for researchers interested in EC and RS.",
"title": ""
},
{
"docid": "f1bd28aba519845b3a6ea8ef92695e79",
"text": "Web 2.0 communities are a quite recent phenomenon which involve large numbers of users and where communication between members is carried out in real time. Despite of those good characteristics, there is still a necessity of developing tools to help users to reach decisions with a high level of consensus in those new virtual environments. In this contribution a new consensus reaching model is presented which uses linguistic preferences and is designed to minimize the main problems that this kind of organization",
"title": ""
},
{
"docid": "42c0f8504f26d46a4cc92d3c19eb900d",
"text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.",
"title": ""
},
{
"docid": "1b347401820c826db444cc3580bde210",
"text": "Utilization of Natural Fibers in Plastic Composites: Problems and Opportunities Roger M. Rowell, Anand R, Sanadi, Daniel F. Caulfield and Rodney E. Jacobson Forest Products Laboratory, ESDA, One Gifford Pinchot Drive, Madison, WI 53705 Department of Forestry, 1630 Linden Drive, University of Wisconsin, WI 53706 recycled. Results suggest that agro-based fibers are a viable alternative to inorganic/material based reinforcing fibers in commodity fiber-thermoplastic composite materials as long as the right processing conditions are used and for applications where higher water absorption may be so critical. These renewable fibers hav low densities and high specific properties and their non-abrasive nature permits a high volume of filling in the composite. Kenaf fivers, for example, have excellent specific properties and have potential to be outstanding reinforcing fillers in plastics. In our experiments, several types of natural fibers were blended with polyprolylene(PP) and then injection molded, with the fiber weight fractions varying to 60%. A compatibilizer or a coupling agent was used to improve the interaction and adhesion between the non-polar matrix and the polar lignocellulosic fibers. The specific tensile and flexural moduli of a 50% by weight (39% by volume) of kenaf-PP composites compares favorably with 40% by weight of glass fiber (19% by volume)-PP injection molded composites. Furthermore, prelimimary results sugget that natural fiber-PP composites can be regrounded and",
"title": ""
},
{
"docid": "198ad1ba78ac0aa315dac6f5730b4f88",
"text": "Life history theory posits that behavioral adaptation to various environmental (ecological and/or social) conditions encountered during childhood is regulated by a wide variety of different traits resulting in various behavioral strategies. Unpredictable and harsh conditions tend to produce fast life history strategies, characterized by early maturation, a higher number of sexual partners to whom one is less attached, and less parenting of offspring. Unpredictability and harshness not only affects dispositional social and emotional functioning, but may also promote the development of personality traits linked to higher rates of instability in social relationships or more self-interested behavior. Similarly, detrimental childhood experiences, such as poor parental care or high parent-child conflict, affect personality development and may create a more distrustful, malicious interpersonal style. The aim of this brief review is to survey and summarize findings on the impact of negative early-life experiences on the development of personality and fast life history strategies. By demonstrating that there are parallels in adaptations to adversity in these two domains, we hope to lend weight to current and future attempts to provide a comprehensive insight of personality traits and functions at the ultimate and proximate levels.",
"title": ""
},
{
"docid": "e5f4b8d4e02f68c90fe4b18dfed2719e",
"text": "The evolution of modern electronic devices is outpacing the scalability and effectiveness of the tools used to analyze digital evidence recovered from them. Indeed, current digital forensic techniques and tools are unable to handle large datasets in an efficient manner. As a result, the time and effort required to conduct digital forensic investigations are increasing. This paper describes a promising digital forensic visualization framework that displays digital evidence in a simple and intuitive manner, enhancing decision making and facilitating the explanation of phenomena in evidentiary data.",
"title": ""
},
{
"docid": "a692778b7f619de5ad4bc3b2d627c265",
"text": "Many students are being left behind by an educational system that some people believe is in crisis. Improving educational outcomes will require efforts on many fronts, but a central premise of this monograph is that one part of a solution involves helping students to better regulate their learning through the use of effective learning techniques. Fortunately, cognitive and educational psychologists have been developing and evaluating easy-to-use learning techniques that could help students achieve their learning goals. In this monograph, we discuss 10 learning techniques in detail and offer recommendations about their relative utility. We selected techniques that were expected to be relatively easy to use and hence could be adopted by many students. Also, some techniques (e.g., highlighting and rereading) were selected because students report relying heavily on them, which makes it especially important to examine how well they work. The techniques include elaborative interrogation, self-explanation, summarization, highlighting (or underlining), the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice. To offer recommendations about the relative utility of these techniques, we evaluated whether their benefits generalize across four categories of variables: learning conditions, student characteristics, materials, and criterion tasks. Learning conditions include aspects of the learning environment in which the technique is implemented, such as whether a student studies alone or with a group. Student characteristics include variables such as age, ability, and level of prior knowledge. Materials vary from simple concepts to mathematical problems to complicated science texts. Criterion tasks include different outcome measures that are relevant to student achievement, such as those tapping memory, problem solving, and comprehension. We attempted to provide thorough reviews for each technique, so this monograph is rather lengthy. However, we also wrote the monograph in a modular fashion, so it is easy to use. In particular, each review is divided into the following sections: General description of the technique and why it should work How general are the effects of this technique? 2a. Learning conditions 2b. Student characteristics 2c. Materials 2d. Criterion tasks Effects in representative educational contexts Issues for implementation Overall assessment The review for each technique can be read independently of the others, and particular variables of interest can be easily compared across techniques. To foreshadow our final recommendations, the techniques vary widely with respect to their generalizability and promise for improving student learning. Practice testing and distributed practice received high utility assessments because they benefit learners of different ages and abilities and have been shown to boost students' performance across many criterion tasks and even in educational contexts. Elaborative interrogation, self-explanation, and interleaved practice received moderate utility assessments. The benefits of these techniques do generalize across some variables, yet despite their promise, they fell short of a high utility assessment because the evidence for their efficacy is limited. 
For instance, elaborative interrogation and self-explanation have not been adequately evaluated in educational contexts, and the benefits of interleaving have just begun to be systematically explored, so the ultimate effectiveness of these techniques is currently unknown. Nevertheless, the techniques that received moderate-utility ratings show enough promise for us to recommend their use in appropriate situations, which we describe in detail within the review of each technique. Five techniques received a low utility assessment: summarization, highlighting, the keyword mnemonic, imagery use for text learning, and rereading. These techniques were rated as low utility for numerous reasons. Summarization and imagery use for text learning have been shown to help some students on some criterion tasks, yet the conditions under which these techniques produce benefits are limited, and much research is still needed to fully explore their overall effectiveness. The keyword mnemonic is difficult to implement in some contexts, and it appears to benefit students for a limited number of materials and for short retention intervals. Most students report rereading and highlighting, yet these techniques do not consistently boost students' performance, so other techniques should be used in their place (e.g., practice testing instead of rereading). Our hope is that this monograph will foster improvements in student learning, not only by showcasing which learning techniques are likely to have the most generalizable effects but also by encouraging researchers to continue investigating the most promising techniques. Accordingly, in our closing remarks, we discuss some issues for how these techniques could be implemented by teachers and students, and we highlight directions for future research.",
"title": ""
},
{
"docid": "f6266e5c4adb4fa24cc353dccccaf6db",
"text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.",
"title": ""
},
{
"docid": "79a02a35c02858a6510fc92b9eadde4e",
"text": "Distributed word representations have been demonstrated to be effective in capturing semantic and syntactic regularities. Unsupervised representation learning from large unlabeled corpora can learn similar representations for those words that present similar cooccurrence statistics. Besides local occurrence statistics, global topical information is also important knowledge that may help discriminate a word from another. In this paper, we incorporate category information of documents in the learning of word representations and to learn the proposed models in a documentwise manner. Our models outperform several state-of-the-art models in word analogy and word similarity tasks. Moreover, we evaluate the learned word vectors on sentiment analysis and text classification tasks, which shows the superiority of our learned word vectors. We also learn high-quality category embeddings that reflect topical meanings.",
"title": ""
},
{
"docid": "75b6168dd008fd1d30851d3cf24d7679",
"text": "We introduce Deep Linear Discriminant Analysis (DeepLDA) which learns linearly separable latent representations in an end-to-end fashion. Classic LDA extracts features which preserve class separability and is used for dimensionality reduction for many classification problems. The central idea of this paper is to put LDA on top of a deep neural network. This can be seen as a non-linear extension of classic LDA. Instead of maximizing the likelihood of target labels for individual samples, we propose an objective function that pushes the network to produce feature distributions which: (a) have low variance within the same class and (b) high variance between different classes. Our objective is derived from the general LDA eigenvalue problem and still allows to train with stochastic gradient descent and back-propagation. For evaluation we test our approach on three different benchmark datasets (MNIST, CIFAR-10 and STL-10). DeepLDA produces competitive results on MNIST and CIFAR-10 and outperforms a network trained with categorical cross entropy (same architecture) on a supervised setting of STL-10.",
"title": ""
},
{
"docid": "c0bf378bd6c763b83249163733c21f07",
"text": "Although videos appear to be very high-dimensional in terms of duration × frame-rate × resolution, temporal smoothness constraints ensure that the intrinsic dimensionality for videos is much lower. In this paper, we use this idea for investigating Domain Adaptation (DA) in videos, an area that remains under-explored. An approach that has worked well for the image DA is based on the subspace modeling of the source and target domains, which works under the assumption that the two domains share a latent subspace where the domain shift can be reduced or eliminated. In this paper, first we extend three subspace based image DA techniques for human action recognition and then combine it with our proposed Eclectic Domain Mixing (EDM) approach to improve the effectiveness of the DA. Further, we use discrepancy measures such as Symmetrized KL Divergence and Target Density Around Source for empirical study of the proposed EDM approach. While, this work mainly focuses on Domain Adaptation in videos, for completeness of the study, we comprehensively evaluate our approach using both object and action datasets. In this paper, we have achieved consistent improvements over chosen baselines and obtained some state-of-the-art results for the datasets.",
"title": ""
},
{
"docid": "76f11326d1a2573aae8925d63a10a1f9",
"text": "It has been widely claimed that attention and awareness are doubly dissociable and that there is no causal relation between them. In support of this view are numerous claims of attention without awareness, and awareness without attention. Although there is evidence that attention can operate on or be drawn to unconscious stimuli, various recent findings demonstrate that there is no empirical support for awareness without attention. To properly test for awareness without attention, we propose that a stimulus be studied using a battery of tests based on diverse, mainstream paradigms from the current attention literature. When this type of analysis is performed, the evidence is fully consistent with a model in which attention is necessary, but not sufficient, for awareness.",
"title": ""
},
{
"docid": "99a728e8b9a351734db9b850fe79bd61",
"text": "Predicting anchor links across social networks has important implications to an array of applications, including cross-network information diffusion and cross-domain recommendation. One challenging problem is: whether and to what extent we can address the anchor link prediction problem, if only structural information of networks is available. Most existing methods, unsupervised or supervised, directly work on networks themselves rather than on their intrinsic structural regularities, and thus their effectiveness is sensitive to the high dimension and sparsity of networks. To offer a robust method, we propose a novel supervised model, called PALE, which employs network embedding with awareness of observed anchor links as supervised information to capture the major and specific structural regularities and further learns a stable cross-network mapping for predicting anchor links. Through extensive experiments on two realistic datasets, we demonstrate that PALE significantly outperforms the state-of-the-art methods.",
"title": ""
}
] | scidocsrr |
f7aa9fe40d401b8e23e6d58dde8991f4 | Music Similarity Measures: What's the use? | [
{
"docid": "59b928fab5d53519a0a020b7461690cf",
"text": "Musical genres are categorical descriptions that are used to describe music. They are commonly used to structure the increasing amounts of music available in digital form on the Web and are important for music information retrieval. Genre categorization for audio has traditionally been performed manually. A particular musical genre is characterized by statistical properties related to the instrumentation, rhythmic structure and form of its members. In this work, algorithms for the automatic genre categorization of audio signals are described. More specifically, we propose a set of features for representing texture and instrumentation. In addition a novel set of features for representing rhythmic structure and strength is proposed. The performance of those feature sets has been evaluated by training statistical pattern recognition classifiers using real world audio collections. Based on the automatic hierarchical genre classification two graphical user interfaces for browsing and interacting with large audio collections have been developed.",
"title": ""
}
] | [
{
"docid": "6d70ac4457983c7df8896a9d31728015",
"text": "This brief presents a differential transmit-receive (T/R) switch integrated in a 0.18-mum standard CMOS technology for wireless applications up to 6 GHz. This switch design employs fully differential architecture to accommodate the design challenge of differential transceivers and improve the linearity performance. It exhibits less than 2-dB insertion loss, higher than 15-dB isolation, in a 60 mumtimes40 mum area. 15-dBm power at 1-dB compression point (P1dB) is achieved without using additional techniques to enhance the linearity. This switch is suitable for differential transceiver front-ends with a moderate power level. To the best of the authors' knowledge, this is the first reported differential T/R switch in CMOS for multistandard and wideband wireless applications",
"title": ""
},
{
"docid": "c0ddc4b83145a1ee7b252d65066b8969",
"text": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of current research. Combining such an embedding model with logic rules has recently attracted increasing attention. Most previous attempts made a one-time injection of logic rules, ignoring the interactive nature between embedding learning and logical inference. And they focused only on hard rules, which always hold with no exception and usually require extensive manual effort to create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a novel paradigm of KG embedding with iterative guidance from soft rules. RUGE enables an embedding model to learn simultaneously from 1) labeled triples that have been directly observed in a given KG, 2) unlabeled triples whose labels are going to be predicted iteratively, and 3) soft rules with various confidence levels extracted automatically from the KG. In the learning process, RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and integrates such newly labeled triples to update the embedding model. Through this iterative procedure, knowledge embodied in logic rules may be better transferred into the learned embeddings. We evaluate RUGE in link prediction on Freebase and YAGO. Experimental results show that: 1) with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines; and 2) despite their uncertainties, automatically extracted soft rules are highly beneficial to KG embedding, even those with moderate confidence levels. The code and data used for this paper can be obtained from https://github.com/iieir-km/RUGE.",
"title": ""
},
{
"docid": "2793e8eb1410b2379a8a416f0560df0a",
"text": "Alzheimer’s disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed symbolic AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits but failed to fully recapitulate AD pathogenic cascades including robust phospho tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold a promise for a novel platform that can be used for mechanism studies in human brain-like environment and high-throughput drug screening (HTS). In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared the standard two-dimensional (2D) culture conditions. Finally, we will discuss a potential impact of the human 3D human neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will contribute to accelerate the discovery of novel AD drugs.",
"title": ""
},
{
"docid": "c43b77b56a6e2cb16a6b85815449529d",
"text": "We propose a new method for clustering multivariate time series. A univariate time series can be represented by a fixed-length vector whose components are statistical features of the time series, capturing the global structure. These descriptive vectors, one for each component of the multivariate time series, are concatenated, before being clustered using a standard fast clustering algorithm such as k-means or hierarchical clustering. Such statistical feature extraction also serves as a dimension-reduction procedure for multivariate time series. We demonstrate the effectiveness and simplicity of our proposed method by clustering human motion sequences: dynamic and high-dimensional multivariate time series. The proposed method based on univariate time series structure and statistical metrics provides a novel, yet simple and flexible way to cluster multivariate time series data efficiently with promising accuracy. The success of our method on the case study suggests that clustering may be a valuable addition to the tools available for human motion pattern recognition research.",
"title": ""
},
{
"docid": "4a5131ec6e40545765e400d738441376",
"text": "Experiments have been performed to investigate the operating modes of a generator of 2/spl times/500-ps bipolar high-voltage, nanosecond pulses with the double amplitude (270 kV) close to that of the charge pulse of the RADAN-303 nanosecond driver. The generator contains an additional peaker shortening the risetime of the starting pulse and a pulse-forming line with two untriggered gas gaps operating with a total jitter of 200 ps.",
"title": ""
},
{
"docid": "ae7117416b4a07d2b15668c2c8ac46e3",
"text": "We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration aspects by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling to rate and measure the popularity of content and honoring the activity of users. OntoWiki enhances the browsing and retrieval by offering semantic enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of the Web 2.0 OntoWiki implements an ”architecture of participation” that allows users to add value to the application as they use it. It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.",
"title": ""
},
{
"docid": "d95ae6900ae353fa0ed32167e0c23f16",
"text": "As well known, fully convolutional network (FCN) becomes the state of the art for semantic segmentation in deep learning. Currently, new hardware designs for deep learning have focused on improving the speed and parallelism of processing units. This motivates memristive solutions, in which the memory units (i.e., memristors) have computing capabilities. However, designing a memristive deep learning network is challenging, since memristors work very differently from the traditional CMOS hardware. This paper proposes a complete solution to implement memristive FCN (MFCN). Voltage selectors are firstly utilized to realize max-pooling layers with the detailed MFCN deconvolution hardware circuit by the massively parallel structure, which is effective since the deconvolution kernel and the input feature are similar in size. Then, deconvolution calculation is realized by converting the image into a column matrix and converting the deconvolution kernel into a sparse matrix. Meanwhile, the convolution realization in MFCN is also studied with the traditional sliding window method rather than the large matrix theory to overcome the shortcoming of low efficiency. Moreover, the conductance values of memristors are predetermined in Tensorflow with ex-situ training method. In other words, we train MFCN in software, then download the trained parameters to the simulink system by writing memristor. The effectiveness of the designed MFCN scheme is verified with improved accuracy over some existing machine learning methods. The proposed scheme is also adapt to LFW dataset with three-classification tasks. However, the MFCN training is time consuming as the computational burden is heavy with thousands of weight parameters with just six layers. In future, it is necessary to sparsify the weight parameters and layers of the MFCN network to speed up computing.",
"title": ""
},
{
"docid": "60d90ae1407c86559af63f20536202dc",
"text": "TCP Westwood (TCPW) is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as wireless networks. The improvement is most significant in wireless networks with lossy links. In fact, TCPW performance is not very sensitive to random errors, while TCP Reno is equally sensitive to random loss and congestion loss and cannot discriminate between them. Hence, the tendency of TCP Reno to overreact to errors. An important distinguishing feature of TCP Westwood with respect to previous wireless TCP “extensions” is that it does not require inspection and/or interception of TCP packets at intermediate (proxy) nodes. Rather, TCPW fully complies with the end-to-end TCP design principle. The key innovative idea is to continuously measure at the TCP sender side the bandwidth used by the connection via monitoring the rate of returning ACKs. The estimate is then used to compute congestion window and slow start threshold after a congestion episode, that is, after three duplicate acknowledgments or after a timeout. The rationale of this strategy is simple: in contrast with TCP Reno which “blindly” halves the congestion window after three duplicate ACKs, TCP Westwood attempts to select a slow start threshold and a congestion window which are consistent with the effective bandwidth used at the time congestion is experienced. We call this mechanism faster recovery. The proposed mechanism is particularly effective over wireless links where sporadic losses due to radio channel problems are often misinterpreted as a symptom of congestion by current TCP schemes and thus lead to an unnecessary window reduction. Experimental studies reveal improvements in throughput performance, as well as in fairness. In addition, friendliness with TCP Reno was observed in a set of experiments showing that TCP Reno connections are not starved by TCPW connections. Most importantly, TCPW is extremely effective in mixed wired and wireless networks where throughput improvements of up to 550% are observed. Finally, TCPW performs almost as well as localized link layer approaches such as the popular Snoop scheme, without incurring the overhead of a specialized link layer protocol.",
"title": ""
},
{
"docid": "136deaa8656bdb1c2491de4effd09838",
"text": "The fabrication technology advancements lead to place more logic on a silicon die which makes verification more challenging task than ever. The large number of resources is required because more than 70% of the design cycle is used for verification. Universal Verification Methodology was developed to provide a well structured and reusable verification environment which does not interfere with the device under test (DUT). This paper contrasts the reusability of I2C using UVM and introduces how the verification environment is constructed and test cases are implemented for this protocol.",
"title": ""
},
{
"docid": "8fbb53199fab6383b8dd01347d62cf86",
"text": "In this paper, we analyze ring oscillator (RO) based physical unclonable function (PUF) on FPGAs. We show that the systematic process variation adversely affects the ability of the RO-PUF to generate unique chip-signatures, and propose a compensation method to mitigate it. Moreover, a configurable ring oscillator (CRO) technique is proposed to reduce noise in PUF responses. Our compensation method could improve the uniqueness of the PUF by an amount as high as 18%. The CRO technique could produce nearly 100% error-free PUF outputs over varying environmental conditions without post-processing while consuming minimum area.",
"title": ""
},
{
"docid": "26b13a3c03014fc910ed973c264e4c9d",
"text": "Deep convolutional neural networks (CNNs) have shown great potential for numerous real-world machine learning applications, but performing inference in large CNNs in real-time remains a challenge. We have previously demonstrated that traditional CNNs can be converted into deep spiking neural networks (SNNs), which exhibit similar accuracy while reducing both latency and computational load as a consequence of their data-driven, event-based style of computing. Here we provide a novel theory that explains why this conversion is successful, and derive from it several new tools to convert a larger and more powerful class of deep networks into SNNs. We identify the main sources of approximation errors in previous conversion methods, and propose simple mechanisms to fix these issues. Furthermore, we develop spiking implementations of common CNN operations such as max-pooling, softmax, and batch-normalization, which allow almost loss-less conversion of arbitrary CNN architectures into the spiking domain. Empirical evaluation of different network architectures on the MNIST and CIFAR10 benchmarks leads to the best SNN results reported to date.",
"title": ""
},
{
"docid": "2b7ac1941127e1d47401d67e6d7856de",
"text": "Alert correlation is an important technique for managing large the volume of intrusion alerts that are raised by heterogenous Intrusion Detection Systems (IDSs). The recent trend of research in this area is towards extracting attack strategies from raw intrusion alerts. It is generally believed that pure intrusion detection no longer can satisfy the security needs of organizations. Intrusion response and prevention are now becoming crucially important for protecting the network and minimizing damage. Knowing the real security situation of a network and the strategies used by the attackers enables network administrators to launches appropriate response to stop attacks and prevent them from escalating. This is also the primary goal of using alert correlation technique. However, most of the current alert correlation techniques only focus on clustering inter-connected alerts into different groups without further analyzing the strategies of the attackers. Some techniques for extracting attack strategies have been proposed in recent years, but they normally require defining a larger number of rules. This paper focuses on developing a new alert correlation technique that can help to automatically extract attack strategies from a large volume of intrusion alerts, without specific prior knowledge about these alerts. The proposed approach is based on two different neural network approaches, namely, Multilayer Perceptron (MLP) and Support Vector Machine (SVM). The probabilistic output of these two methods is used to determine with which previous alerts this current alert should be correlated. This suggests the causal relationship of two alerts, which is helpful for constructing attack scenarios. One of the distinguishing feature of the proposed technique is that an Alert Correlation Matrix (ACM) is used to store correlation strengthes of any two types of alerts. ACM is updated in the training process, and the information (correlation strength) is then used for extracting high level attack strategies.",
"title": ""
},
{
"docid": "b8cec6cfbc55c9fd6a7d5ed951bcf4eb",
"text": "Increasingly large amount of multidimensional data are being generated on a daily basis in many applications. This leads to a strong demand for learning algorithms to extract useful information from these massive data. This paper surveys the field of multilinear subspace learning (MSL) for dimensionality reduction of multidimensional data directly from their tensorial representations. It discusses the central issues of MSL, including establishing the foundations of the field via multilinear projections, formulating a unifying MSL framework for systematic treatment of the problem, examining the algorithmic aspects of typical MSL solutions, and categorizing both unsupervised and supervised MSL algorithms into taxonomies. Lastly, the paper summarizes a wide range of MSL applications and concludes with perspectives on future research directions.",
"title": ""
},
{
"docid": "b25b7100c035ad2953fb43087ede1625",
"text": "In this paper, a novel 10W substrate integrated waveguide (SIW) high power amplifier (HPA) designed with SIW matching network (MN) is presented. The SIW MN is connected with microstrip line using microstrip-to-SIW transition. An inductive metallized post in SIW is employed to realize impedance matching. At the fundamental frequency of 2.14 GHz, the impedance matching is realized by moving the position of the inductive metallized post in the SIW. Both the input and output MNs are designed with the proposed SIW-based MN concept. One SIW-based 10W HPA using GaN HEMT at 2.14 GHz is designed, fabricated, and measured. The proposed SIW-based HPA can be easily connected with any microstrip circuit with microstrip-to-SIW transition. Measured results show that the maximum power added efficiency (PAE) is 65.9 % with 39.8 dBm output power and the maximum gain is 20.1 dB with 30.9 dBm output power at 2.18 GHz. The size of the proposed SIW-based HPA is comparable with other microstrip-based PAs designed at the operating frequency.",
"title": ""
},
{
"docid": "529ca36809a7052b9495279aa1081fcc",
"text": "To effectively control complex dynamical systems, accurate nonlinear models are typically needed. However, these models are not always known. In this paper, we present a data-driven approach based on Gaussian processes that learns models of quadrotors operating in partially unknown environments. What makes this challenging is that if the learning process is not carefully controlled, the system will go unstable, i.e., the quadcopter will crash. To this end, barrier certificates are employed for safe learning. The barrier certificates establish a non-conservative forward invariant safe region, in which high probability safety guarantees are provided based on the statistics of the Gaussian Process. A learning controller is designed to efficiently explore those uncertain states and expand the barrier certified safe region based on an adaptive sampling scheme. Simulation results are provided to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "613b014ea02019a78be488a302ff4794",
"text": "In this study, the robustness of approaches to the automatic classification of emotions in speech is addressed. Among the many types of emotions that exist, two groups of emotions are considered, adult-to-adult acted vocal expressions of common types of emotions like happiness, sadness, and anger and adult-to-infant vocal expressions of affective intents also known as ‘‘motherese’’. Specifically, we estimate the generalization capability of two feature extraction approaches, the approach developed for Sony’s robotic dog AIBO (AIBO) and the segment-based approach (SBA) of [Shami, M., Kamel, M., 2005. Segment-based approach to the recognition of emotions in speech. In: IEEE Conf. on Multimedia and Expo (ICME05), Amsterdam, The Netherlands]. Three machine learning approaches are considered, K-nearest neighbors (KNN), Support vector machines (SVM) and Ada-boosted decision trees and four emotional speech databases are employed, Kismet, BabyEars, Danish, and Berlin databases. Single corpus experiments show that the considered feature extraction approaches AIBO and SBA are competitive on the four databases considered and that their performance is comparable with previously published results on the same databases. The best choice of machine learning algorithm seems to depend on the feature extraction approach considered. Multi-corpus experiments are performed with the Kismet–BabyEars and the Danish–Berlin database pairs that contain parallel emotional classes. Automatic clustering of the emotional classes in the database pairs shows that the patterns behind the emotions in the Kismet–BabyEars pair are less database dependent than the patterns in the Danish–Berlin pair. In off-corpus testing the classifier is trained on one database of a pair and tested on the other. This provides little improvement over baseline classification. In integrated corpus testing, however, the classifier is machine learned on the merged databases and this gives promisingly robust classification results, which suggest that emotional corpora with parallel emotion classes recorded under different conditions can be used to construct a single classifier capable of distinguishing the emotions in the merged corpora. Such a classifier is more robust than a classifier learned on a single corpus as it can recognize more varied expressions of the same emotional classes. These findings suggest that the existing approaches for the classification of emotions in speech are efficient enough to handle larger amounts of training data without any reduction in classification accuracy. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2549ed70fd2e06c749bf00193dad1f4d",
"text": "Phenylketonuria (PKU) is an inborn error of metabolism caused by deficiency of the hepatic enzyme phenylalanine hydroxylase (PAH) which leads to high blood phenylalanine (Phe) levels and consequent damage of the developing brain with severe mental retardation if left untreated in early infancy. The current dietary Phe restriction treatment has certain clinical limitations. To explore a long-term nondietary restriction treatment, a somatic gene transfer approach in a PKU mouse model (C57Bl/6-Pahenu2) was employed to examine its preclinical feasibility. A recombinant adeno-associated virus (rAAV) vector containing the murine Pah-cDNA was generated, pseudotyped with capsids from AAV serotype 8, and delivered into the liver of PKU mice via single intraportal or tail vein injections. The blood Phe concentrations decreased to normal levels (⩽100 μM or 1.7 mg/dl) 2 weeks after vector application, independent of the sex of the PKU animals and the route of application. In particular, the therapeutic long-term correction in females was also dramatic, which had previously been shown to be difficult to achieve. Therapeutic ranges of Phe were accompanied by the phenotypic reversion from brown to black hair. In treated mice, PAH enzyme activity in whole liver extracts reversed to normal and neither hepatic toxicity nor immunogenicity was observed. In contrast, a lentiviral vector expressing the murine Pah-cDNA, delivered via intraportal vein injection into PKU mice, did not result in therapeutic levels of blood Phe. This study demonstrates the complete correction of hyperphenylalaninemia in both males and females with a rAAV serotype 8 vector. More importantly, the feasibility of a single intravenous injection may pave the way to develop a clinical gene therapy procedure for PKU patients.",
"title": ""
},
{
"docid": "87f05972a93b2b432d0dad6d55e97502",
"text": "The daunting volumes of community-contributed media contents on the Internet have become one of the primary sources for online advertising. However, conventional advertising treats image and video advertising as general text advertising by displaying relevant ads based on the contents of the Web page, without considering the inherent characteristics of visual contents. This article presents a contextual advertising system driven by images, which automatically associates relevant ads with an image rather than the entire text in a Web page and seamlessly inserts the ads in the nonintrusive areas within each individual image. The proposed system, called ImageSense, supports scalable advertising of, from root to node, Web sites, pages, and images. In ImageSense, the ads are selected based on not only textual relevance but also visual similarity, so that the ads yield contextual relevance to both the text in the Web page and the image content. The ad insertion positions are detected based on image salience, as well as face and text detection, to minimize intrusiveness to the user. We evaluate ImageSense on a large-scale real-world images and Web pages, and demonstrate the effectiveness of ImageSense for online image advertising.",
"title": ""
},
{
"docid": "0d1da055e444a90ec298a2926de9fe7b",
"text": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time. A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies.",
"title": ""
},
{
"docid": "3115c716a065334dc0cdec9e33e24149",
"text": "With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today’s increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.",
"title": ""
}
] | scidocsrr |
92514f8cc405618220b16a45ee7c8490 | Autonomous Vehicle Security: A Taxonomy of Attacks and Defences | [
{
"docid": "678d9eab7d1e711f97bf8ef5aeaebcc4",
"text": "This work presents a study of current and future bus systems with respect to their security against various malicious attacks. After a brief description of the most well-known and established vehicular communication systems, we present feasible attacks and potential exposures for these automotive networks. We also provide an approach for secured automotive communication based on modern cryptographic mechanisms that provide secrecy, manipulation prevention and authentication to solve most of the vehicular bus security issues.",
"title": ""
},
{
"docid": "97501db2db0fb83fef5cf4e30d1728d8",
"text": "Autonomous automated vehicles are the next evolution in transportation and will improve safety, traffic efficiency and driving experience. Automated vehicles are equipped with multiple sensors (LiDAR, radar, camera, etc.) enabling local awareness of their surroundings. A fully automated vehicle will unconditionally rely on its sensors readings to make short-term (i.e. safety-related) and long-term (i.e. planning) driving decisions. In this context, sensors have to be robust against intentional or unintentional attacks that aim at lowering sensor data quality to disrupt the automation system. This paper presents remote attacks on camera-based system and LiDAR using commodity hardware. Results from laboratory experiments show effective blinding, jamming, replay, relay, and spoofing attacks. We propose software and hardware countermeasures that improve sensors resilience against these attacks.",
"title": ""
},
{
"docid": "400dce50037a38d19a3057382d9246b5",
"text": "A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.",
"title": ""
}
] | [
{
"docid": "b140f08d25d5c37c4fa8743333664af2",
"text": " Random walks on an association graph using candidate matches as nodes. Rank candidate matches by stationary distribution Personalized jump for enforcing the matching constraints during the random walks process Matching constraints satisfying reweighting vector is calculated iteratively by inflation and bistochastic normalization Due to object motion or viewpoint change, relationships between two nodes are not exactly same Outlier Noise Deformation Noise",
"title": ""
},
{
"docid": "87e4bc893f46efdb50416e8386501d80",
"text": "the boom in the technology has resulted in emergence of new concepts and challenges. Big data is one of those spoke about terms today. Big data is becoming a synonym for competitive advantages in business rivalries. Despite enormous benefits, big data accompanies some serious challenges and when it comes to analyzing of big data, it requires some serious thought. This study explores Big Data terminology and its analysis concepts using sample from Twitter data with the help of one of the most industry trusted real time processing and fault tolerant tool called Apache Storm. Keywords— Big Data, Apache Storm, real-time processing, open Source.",
"title": ""
},
{
"docid": "c071d5a7ff1dbfd775e9ffdee1b07662",
"text": "OBJECTIVES\nComplete root coverage is the primary objective to be accomplished when treating gingival recessions in patients with aesthetic demands. Furthermore, in order to satisfy patient demands fully, root coverage should be accomplished by soft tissue, the thickness and colour of which should not be distinguishable from those of adjacent soft tissue. The aim of the present split-mouth study was to compare the treatment outcome of two surgical approaches of the bilaminar procedure in terms of (i) root coverage and (ii) aesthetic appearance of the surgically treated sites.\n\n\nMATERIAL AND METHODS\nFifteen young systemically and periodontally healthy subjects with two recession-type defects of similar depth affecting contralateral teeth in the aesthetic zone of the maxilla were enrolled in the study. All recessions fall into Miller class I or II. Randomization for test and control treatment was performed by coin toss immediately prior to surgery. All defects were treated with a bilaminar surgical technique: differences between test and control sites resided in the size, thickness and positioning of the connective tissue graft. The clinical re-evaluation was made 1 year after surgery.\n\n\nRESULTS\nThe two bilaminar techniques resulted in a high percentage of root coverage (97.3% in the test and 94.7% in the control group) and complete root coverage (gingival margin at the cemento-enamel junction (CEJ)) (86.7% in the test and 80% in the control teeth), with no statistically significant difference between them. Conversely, better aesthetic outcome and post-operative course were indicated by the patients for test compared to control sites.\n\n\nCONCLUSIONS\nThe proposed modification of the bilaminar technique improved the aesthetic outcome. The reduced size and minimal thickness of connective tissue graft, together with its positioning apical to the CEJ, facilitated graft coverage by means of the coronally advanced flap.",
"title": ""
},
{
"docid": "f2edf7cc3671b38ae5f597e840eda3a2",
"text": "This paper describes the process of creating a design pattern management interface for a collection of mobile design patterns. The need to communicate how patterns are interrelated and work together to create solutions motivated the creation of this interface. Currently, most design pattern collections are presented in alphabetical lists. The Oracle Mobile User Experience team approach is to communicate relationships visually by highlighting and connecting related patterns. Before the team designed the interface, we first analyzed common relationships between patterns and created a pattern language map. Next, we organized the patterns into conceptual design categories. Last, we designed a pattern management interface that enables users to browse patterns and visualize their relationships.",
"title": ""
},
{
"docid": "1580e188796e4e7b6c5930e346629849",
"text": "This paper describes the development process of FarsNet; a lexical ontology for the Persian language. FarsNet is designed to contain a Persian WordNet with about 10000 synsets in its first phase and grow to cover verbs' argument structures and their selectional restrictions in its second phase. In this paper we discuss the semi-automatic approach to create the first phase: the Persian WordNet.",
"title": ""
},
{
"docid": "68f3b3521b426b696419a58e6d389aae",
"text": "A new scan that matches an aided Inertial Navigation System (INS) with a low-cost LiDAR is proposed as an alternative to GNSS-based navigation systems in GNSS-degraded or -denied environments such as indoor areas, dense forests, or urban canyons. In these areas, INS-based Dead Reckoning (DR) and Simultaneous Localization and Mapping (SLAM) technologies are normally used to estimate positions as separate tools. However, there are critical implementation problems with each standalone system. The drift errors of velocity, position, and heading angles in an INS will accumulate over time, and on-line calibration is a must for sustaining positioning accuracy. SLAM performance is poor in featureless environments where the matching errors can significantly increase. Each standalone positioning method cannot offer a sustainable navigation solution with acceptable accuracy. This paper integrates two complementary technologies-INS and LiDAR SLAM-into one navigation frame with a loosely coupled Extended Kalman Filter (EKF) to use the advantages and overcome the drawbacks of each system to establish a stable long-term navigation process. Static and dynamic field tests were carried out with a self-developed Unmanned Ground Vehicle (UGV) platform-NAVIS. The results prove that the proposed approach can provide positioning accuracy at the centimetre level for long-term operations, even in a featureless indoor environment.",
"title": ""
},
{
"docid": "6f62c4d2e56563f6edcf989beb2f2d41",
"text": "We review the formalism of holographic renormalization. We start by discussing mathematical results on asymptotically anti-de Sitter spacetimes. We then outline the general method of holographic renormalization. The method is illustrated by working all details in a simple example: a massive scalar field on anti-de Sitter spacetime. The discussion includes the derivation of the on-shell renormalized action, of holographic Ward identities, anomalies and RG equations, and the computation of renormalized one-, twoand four-point functions. We then discuss the application of the method to holographic RG flows. We also show that the results of the near-boundary analysis of asymptotically AdS spacetimes can be analytically continued to apply to asymptotically de Sitter spacetimes. In particular, it is shown that the Brown-York stress energy tensor of de Sitter spacetime is equal, up to a dimension dependent sign, to the Brown-York stress energy tensor of an associated AdS spacetime. kostas@feynman.princeton.edu",
"title": ""
},
{
"docid": "4a007ef89bac5daf90463afa68e663ed",
"text": "OBJECTIVE\nTo compare the short-term effect and advantage of transforaminal epidural steroid injection (TFESI) performed using the Kambin's triangle and subpedicular approaches.\n\n\nMETHOD\nForty-two patients with radicular pain from lumbar spinal stenosis were enrolled. Subjects were randomly assigned to one of two groups. All procedures were performed using C-arm KMC 950. The frequency of complications during the procedure and the effect of TFESI at 2 and 4 weeks after the procedure between the two groups were compared. Short-term outcomes were measured using a visual numeric scale (VNS) and a five-grade scale. Multiple logistic regression analyses were performed to evaluate the relationship between possible outcome predictors (Kambin's triangle or subpedicular approach, age, duration of symptoms and sex) and the therapeutic effect.\n\n\nRESULTS\nVNS was improved 2 weeks after the injection and continued to improve until 4 weeks in both groups. There were no statistical differences in changes of VNS, effectiveness and contrast spread pattern between these two groups. No correlation was found between the other variables tested and therapeutic effect. Spinal nerve pricking occurred in five cases of the subpedicular and in none of the cases of the Kambin's triangle approach (p<0.05).\n\n\nCONCLUSION\nThe Kambin's triangle approach is as efficacious as the subpedicular approach for short-term effect and offers considerable advantages (i.e., less spinal nerve pricking during procedure). The Kambin's triangle approach maybe an alternative method for transforaminal epidural steroid injection in cases where needle tip positioning in the anterior epidural space is difficult.",
"title": ""
},
{
"docid": "947d4c60427377bcb466fe1393c5474c",
"text": "This paper presents a single BCD technology platform with high performance power devices at a wide range of operating voltages. The platform offers 6 V to 70 V LDMOS devices. All devices offer best-in-class specific on-resistance of 20 to 40 % lower than that of the state-of-the-art IC-based LDMOS devices and robustness better than the square SOA (safe-operating-area). Fully isolated LDMOS devices, in which independent bias is capable for circuit flexibility, demonstrate superior specific on-resistance (e.g. 11.9 mΩ-mm2 for breakdown voltage of 39 V). Moreover, the unusual sudden current enhancement appeared in the ID-VD saturation region of most of the high voltage LDMOS devices is significantly suppressed.",
"title": ""
},
{
"docid": "494bc0a3ab30c86853de630ae632b3d4",
"text": "Although the biomechanical properties of the various types of running foot strike (rearfoot, midfoot, and forefoot) have been studied extensively in the laboratory, only a few studies have attempted to quantify the frequency of running foot strike variants among runners in competitive road races. We classified the left and right foot strike patterns of 936 distance runners, most of whom would be considered of recreational or sub-elite ability, at the 10 km point of a half-marathon/marathon road race. We classified 88.9% of runners at the 10 km point as rearfoot strikers, 3.4% as midfoot strikers, 1.8% as forefoot strikers, and 5.9% of runners exhibited discrete foot strike asymmetry. Rearfoot striking was more common among our sample of mostly recreational distance runners than has been previously reported for samples of faster runners. We also compared foot strike patterns of 286 individual marathon runners between the 10 km and 32 km race locations and observed increased frequency of rearfoot striking at 32 km. A large percentage of runners switched from midfoot and forefoot foot strikes at 10 km to rearfoot strikes at 32 km. The frequency of discrete foot strike asymmetry declined from the 10 km to the 32 km location. Among marathon runners, we found no significant relationship between foot strike patterns and race times.",
"title": ""
},
{
"docid": "469e5c159900b9d6662a9bfe9e01fde7",
"text": "In the research of rule extraction from neural networks,fidelity describes how well the rules mimic the behavior of a neural network whileaccuracy describes how well the rules can be generalized. This paper identifies thefidelity-accuracy dilemma. It argues to distinguishrule extraction using neural networks andrule extraction for neural networks according to their different goals, where fidelity and accuracy should be excluded from the rule quality evaluation framework, respectively.",
"title": ""
},
{
"docid": "1851533953769821423580614feae837",
"text": "This work presents a 54 Gb/s monolithically integrated silicon photonics receiver (Rx). A germanium photodiode (Ge-PD) is monolithically integrated with a transimpedance amplifier (TIA) and low frequency feedback loop to compensate for the DC input overload current. Bandwidth enhancement techniques are used to extend the bandwidth compared to previously published monolithically integrated receivers. Implemented in a 0.25 μm SiGe:C BiCMOS electronic/photonic integrated circuit (EPIC) technology, the Rx operates at λ=1.55 μm, achieves an optical/electrical (O/E) bandwidth of 47GHz with only ±5ps group delay variation and a sensitivity of 0.2dBm for 4.5×10-11 BER at 40 Gb/s and 0.97dBm for 1.05×10-6 BER at 54 Gb/s. It dissipates 73mW of power, while occupying 1.6mm2 of area. To the best of the author's knowledge, this work presents the state-of-the-art bandwidth and bit rate in monolithically integrated photonic receivers.",
"title": ""
},
{
"docid": "2bf678c98d27501443f0f6fdf35151d7",
"text": "The goal of video summarization is to distill a raw video into a more compact form without losing much semantic information. However, previous methods mainly consider the diversity and representation interestingness of the obtained summary, and they seldom pay sufficient attention to semantic information of resulting frame set, especially the long temporal range semantics. To explicitly address this issue, we propose a novel technique which is able to extract the most semantically relevant video segments (i.e., valid for a long term temporal duration) and assemble them into an informative summary. To this end, we develop a semantic attended video summarization network (SASUM) which consists of a frame selector and video descriptor to select an appropriate number of video shots by minimizing the distance between the generated description sentence of the summarized video and the human annotated text of the original video. Extensive experiments show that our method achieves a superior performance gain over previous methods on two benchmark datasets.",
"title": ""
},
{
"docid": "31a15c44bca39611b14c113bb5063ee3",
"text": "Due to the large amounts of text data being generated and organizations increased rapidly with the availability of Big Data platforms, there is no enough time to read and understand each document and make decisions based on document contents. Hence, there is a great demand for summarizing text documents to provide a representative substitute for the original documents. Text summarization is the task of reducing a text document with help of a computer program in order to create a summary that keeps the most important points of the original document. It is not easy task for people to manually summarize large documents of text data. There are two methods to summarize a text data 1) extractive summarization and 2) abstractive summarization. An extractive summarization method selects important sentence, content etc. from the original text data and combined them into shorter form. An abstractive summarization method understand the original text data and re-telling it in fewer words to generate summary. Popularity and growth of E-commerce has led to the creation of several websites that market and sell products as well as allow customer to post reviews and that reviews about a product grows rapidly. The number of reviews can be more than hundreds or thousands for a particular product. So it is difficult for a potential customer to take decision about products by reading all reviews manually and if he reads only a few of those reviews, then he may get a biased and confused view about the product. Product reviews are written by persons is not in structured form, natural language text; the process of summarizing them is one challenge and has a great commercial importance. With a feature-based summary, a current customer can easily see what other customer feels about product and can take decision. For a maker of a product, it is possible to combine summaries from multiple web sources to produce a single report for each product. In this paper, we present a survey and Comparative Analysis of Text Summarization Approach for Unstructured Customer Reviews in the field of E-commerce. Keywords— Text Mining, Text summarization, Product Review, Information Retrieval, E-commerce, opinion analysis.",
"title": ""
},
{
"docid": "69c0b722d5492415046ac28d55f0914b",
"text": "BACKGROUND\nAllergic contact dermatitis caused by (meth)acrylates is well known, both in occupational and in non-occupational settings. Contact hypersensitivity to electrocardiogram (ECG) electrodes containing (meth)acrylates is rarely reported.\n\n\nOBJECTIVE\nTo report the first case of contact dermatitis caused by acrylic acid impurity in ECG electrodes.\n\n\nMATERIALS AND METHODS\nPatch tests were performed with separate components of electrodes and some (meth)acrylates. This was followed by high-performance liquid chromatography of electrode hydrogel.\n\n\nRESULTS\nThe patient was contact-allergic to electrode hydrogel but not to its separate constituents. Positive reactions were observed to 2-hydroxyethyl methacrylate (2-HEMA), 2-hydroxypropyl methacrylate (2-HPMA) and ethyleneglycol dimethacrylate (EGDMA). Subsequent analysis showed that the electrode hydrogel contained acrylic acid as an impurity. The latter was subsequently patch tested, with a positive result.\n\n\nCONCLUSION\nThe sensitization resulting from direct contact with ECG electrodes was caused by acrylic acid, present as an impurity in ECG electrodes. Positive reactions to 2-HEMA, 2-HPMA and EGDMA are considered to be cross-reactions.",
"title": ""
},
{
"docid": "83f13e90a0f0997a823d25534b6fc629",
"text": "High-frequency-link (HFL) power conversion systems (PCSs) are attracting more and more attentions in academia and industry for high power density, reduced weight, and low noise without compromising efficiency, cost, and reliability. In HFL PCSs, dual-active-bridge (DAB) isolated bidirectional dc-dc converter (IBDC) serves as the core circuit. This paper gives an overview of DAB-IBDC for HFL PCSs. First, the research necessity and development history are introduced. Second, the research subjects about basic characterization, control strategy, soft-switching solution and variant, as well as hardware design and optimization are reviewed and analyzed. On this basis, several typical application schemes of DAB-IBDC for HPL PCSs are presented in a worldwide scope. Finally, design recommendations and future trends are presented. As the core circuit of HFL PCSs, DAB-IBDC has wide prospects. The large-scale practical application of DAB-IBDC for HFL PCSs is expected with the recent advances in solid-state semiconductors, magnetic and capacitive materials, and microelectronic technologies.",
"title": ""
},
{
"docid": "dbbea89ac8120ee84b3174207bddcdb7",
"text": "Recently, due to the huge growth of web pages, social media and modern applications, text clustering technique has emerged as a significant task to deal with a huge amount of text documents. Some web pages are easily browsed and tidily presented via applying the clustering technique in order to partition the documents into a subset of homogeneous clusters. In this paper, two novel text clustering algorithms based on krill herd (KH) algorithm are proposed to improve the web text documents clustering. In the first method, the basic KH algorithm with all its operators is utilized while in the second method, the genetic operators in the basic KH algorithm are neglected. The performance of the proposed KH algorithms is analyzed and compared with the k-mean algorithm. The experiments were conducted using four standard benchmark text datasets. The results showed that the proposed KH algorithms outperformed the k-mean algorithm in term of clusters quality that is evaluated using two common clustering measures, namely, Purity and Entropy.",
"title": ""
},
{
"docid": "5a0fe40414f7881cc262800a43dfe4d0",
"text": "In this work, a passive rectifier circuit is presented, which is operating at 868 MHz. It allows energy harvesting from low power RF waves with a high efficiency. It consists of a novel multiplier circuit design and high quality components to reduce parasitic effects, losses and reaches a low startup voltage. Using lower capacitor rises up the switching speed of the whole circuit. An inductor L serves to store energy in a magnetic field during the negative cycle wave and returns it during the positive one. A low pass filter is arranged in cascade with the rectifier circuit to reduce ripple at high frequencies and to get a stable DC signal. A 50 kΩ load is added at the output to measure the output power and to visualize the behavior of the whole circuit. Simulation results show an outstanding potential of this RF-DC converter witch has a relative high sensitivity beginning with -40 dBm.",
"title": ""
},
{
"docid": "595052e154117ce66202a1a82e0a4072",
"text": "This paper presents the design of a new haptic feedback device for transradial myoelectric upper limb prosthesis that allows the amputee person to perceive the sensation of force-gripping and object-sliding. The system designed has three mechanical-actuator units to convey the sensation of force, and one vibrotactile unit to transmit the sensation of object sliding. The device designed will be placed on the user's amputee forearm. In order to validate the design of the structure, a stress analysis through Finite Element Method (FEM) is conducted.",
"title": ""
},
{
"docid": "72f9d32f241992d02990a7a2e9aad9bb",
"text": "— Improved methods are proposed for disk drive failure prediction. The SMART (Self Monitoring and Reporting Technology) failure prediction system is currently implemented in disk drives. Its purpose is to predict the near-term failure of an individual hard disk drive, and issue a backup warning to prevent data loss. Two experimentally tests of SMART showed only moderate accuracy at low false alarm rates. (A rate of 0.2% of total drives per year implies that 20% of drive returns would be good drives, relative to ≈1% annual failure rate of drives). This requirement for very low false alarm rates is well known in medical diagnostic tests for rare diseases, and methodology used there suggests ways to improve SMART. ACRONYMS ATA Standard drive interface, desktop computers FA Failure analysis of apparently failed drive FAR False alarm rate, 100 times probability value MVRS Multivariate rank sum statistical test NPF Drive failed, but “No problem found” in FA RS Rank sum statistical hypothesis test R Sum of ranks of warning set data Rc Predict fail if R> Rc critical value SCSI Standard drive interface, high-end computers SMART “Self monitoring and reporting technology” WA Failure warning accuracy (probability) Two improved SMART algorithms are proposed here. They use the SMART internal drive attribute measurements in present drives. The present warning algorithm based on maximum error thresholds is replaced by distribution-free statistical hypothesis tests. These improved algorithms are computationally simple enough to be implemented in drive microprocessor firmware code. They require only integer sort operations to put several hundred attribute values in rank order. Some tens of these ranks are added up and the SMART warning is issued if the sum exceeds a prestored limit. NOTATION: n Number of reference (old) measurements m Number of warning (new) measurements N Total ranked measurements (n+m) p Number of different attributes measured Q(X) Normal probability Pr(x>X) RS Rank sum statistical hypothesis test R Sum of ranks of warning set data Rc Predict fail if R> Rc critical value",
"title": ""
}
] | scidocsrr |
cb09844b251fc81d12f255fabf2fd246 | Electrodes for transcutaneous (surface) electrical stimulation | [
{
"docid": "66dc20e12d8b6b99b67485203293ad07",
"text": "A parametric model was developed to describe the variation of dielectric properties of tissues as a function of frequency. The experimental spectrum from 10 Hz to 100 GHz was modelled with four dispersion regions. The development of the model was based on recently acquired data, complemented by data surveyed from the literature. The purpose is to enable the prediction of dielectric data that are in line with those contained in the vast body of literature on the subject. The analysis was carried out on a Microsoft Excel spreadsheet. Parameters are given for 17 tissue types.",
"title": ""
}
] | [
{
"docid": "53dc606897bd6388c729cc8138027b31",
"text": "Abstract|This paper presents transient stability and power ow models of Thyristor Controlled Reactor (TCR) and Voltage Sourced Inverter (VSI) based Flexible AC Transmission System (FACTS) Controllers. Models of the Static VAr Compensator (SVC), the Thyristor Controlled Series Compensator (TCSC), the Static VAr Compensator (STATCOM), the Static Synchronous Source Series Compensator (SSSC), and the Uni ed Power Flow Controller (UPFC) appropriate for voltage and angle stability studies are discussed in detail. Validation procedures obtained for a test system with a detailed as well as a simpli ed UPFC model are also presented and brie y discussed.",
"title": ""
},
{
"docid": "6ad07075bdeff6e662b3259ba39635be",
"text": "We discuss a new deblurring problems in this paper. Focus measurements play a fundamental role in image processing techniques. Most traditional methods neglect spatial information in the frequency domain. Therefore, this study analyzed image data in the frequency domain to determine the value of spatial information. but instead misleading noise reduction results . We found that the local feature is not always a guide for noise reduction. This finding leads to a new method to measure the image edges in focus deblurring. We employed an all-in-focus measure in the frequency domain, based on the energy level of frequency components. We also used a multi-circle enhancement model to analyze this spatial information to provide a more accurate method for measuring images. We compared our results with those using other methods in similar studies. Findings demonstrate the effectiveness of our new method.",
"title": ""
},
{
"docid": "e6c3326af0af36a1197b08e7d2435041",
"text": "HUMAN speech requires complex planning and coordination of mouth and tongue movements. Certain types of brain injury can lead to a condition known as apraxia of speech, in which patients are impaired in their ability to coordinate speech movements but their ability to perceive speech sounds, including their own errors, is unaffected1,3. The brain regions involved in coordinating speech, however, remain largely unknown. In this study, brain lesions of 25 stroke patients with a disorder in the motor planning of articulatory movements were compared with lesions of 19 patients without such deficits. A robust double dissociation was found between these two groups. All patients with articulatory planning deficits had lesions that included a discrete region of the left precentral gyms of the insula, a cortical area beneath the frontal and temporal lobes. This area was completely spared in all patients without these articulation deficits. Thus this area seems to be specialized for the motor planning of speech.",
"title": ""
},
{
"docid": "89eee86640807e11fa02d0de4862b3a5",
"text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.",
"title": ""
},
{
"docid": "69cdb1c8a277c69a167a9a98a52d407c",
"text": "ABSTRACT This paper describes methodologies applied and results achieved in the framework of the ESPRIT Basic Research Action B-Learn II (project no. 7274). B-Learn II is one of the rst projects working towards an application of Machine Learning techniques in elds of industrial relevance, which are much more complex than the domains usually treated in ML research. In particular, B-Learn II aims at easing the programming of robots and enhancing their ability to cooperate with humans. The paper gives a short introduction to learning in robotics and to the three applications under consideration in B-Learn II. Afterwards, learning methodologies used in each of the applications, the experimental setups, and the results obtained are described. In general, it can be found that providing good examples and a good interface between the learning and the performance components is crucial for success, so the extension of the \"Programming by Demonstration\" paradigm to robotics has become one of the key aspects of B-Learn II.",
"title": ""
},
{
"docid": "562bce85b8bb43390b87817be4da8cb3",
"text": "Variational autoencoders (vaes) learn distributions of high-dimensional data. They model data with a deep latent-variable model and then fit the model by maximizing a lower bound of the log marginal likelihood. vaes can capture complex distributions, but they can also suffer from an issue known as \"latent variable collapse,\" especially if the likelihood model is powerful. Specifically, the lower bound involves an approximate posterior of the latent variables; this posterior \"collapses\" when it is set equal to the prior, i.e., when the approximate posterior is independent of the data.Whilevaes learn good generativemodels, latent variable collapse prevents them from learning useful representations. In this paper, we propose a simple new way to avoid latent variable collapse by including skip connections in our generative model; these connections enforce strong links between the latent variables and the likelihood function. We study generative skip models both theoretically and empirically. Theoretically, we prove that skip models increase the mutual information between the observations and the inferred latent variables. Empirically, we study images (MNIST and Omniglot) and text (Yahoo). Compared to existing VAE architectures, we show that generative skip models maintain similar predictive performance but lead to less collapse and provide more meaningful representations of the data.",
"title": ""
},
{
"docid": "ea84c28e02a38caff14683681ea264d7",
"text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression. While local anomaly is typically detected as a 3D pattern matching problem, we are more interested in global anomaly that involves multiple normal events interacting in an unusual manner such as car accident. To simultaneously detect local and global anomalies, we formulate the extraction of normal interactions from training video as the problem of efficiently finding the frequent geometric relations of the nearby sparse spatio-temporal interest points. A codebook of interaction templates is then constructed and modeled using Gaussian process regression. A novel inference method for computing the likelihood of an observed interaction is also proposed. As such, our model is robust to slight topological deformations and can handle the noise and data unbalance problems in the training data. Simulations show that our system outperforms the main state-of-the-art methods on this topic and achieves at least 80% detection rates based on three challenging datasets.",
"title": ""
},
{
"docid": "5aa20cb4100085a12d02c6789ad44097",
"text": "The rapid progress in nanoelectronics showed an urgent need for microwave measurement of impedances extremely different from the 50Ω reference impedance of measurement instruments. In commonly used methods input impedance or admittance of a device under test (DUT) is derived from measured value of its reflection coefficient causing serious accuracy problems for very high and very low impedances due to insufficient sensitivity of the reflection coefficient to impedance of the DUT. This paper brings theoretical description and experimental verification of a method developed especially for measurement of extreme impedances. The method can significantly improve measurement sensitivity and reduce errors caused by the VNA. It is based on subtraction (or addition) of a reference reflection coefficient and the reflection coefficient of the DUT by a passive network, amplifying the resulting signal by an amplifier and measuring the amplified signal as a transmission coefficient by a common vector network analyzer (VNA). A suitable calibration technique is also presented.",
"title": ""
},
{
"docid": "03e267aeeef5c59aab348775d264afce",
"text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].",
"title": ""
},
{
"docid": "15ad5044900511277e0cd602b0c07c5e",
"text": "Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole brain VBMs revealed a rightward cortical atrophy pattern homologous to the left lateralized speech production network was associated with intentional emotional imitation deficits. Results point to a possible neural mechanisms underlying complex socioemotional communication deficits in neurodegenerative disease patients.",
"title": ""
},
{
"docid": "63a3126fb97982e6d52265ae3d07c0cc",
"text": "This work complements our previous efforts in generating realistic fingerprint images for test purposes. The main variability which characterizes the acquisition of a fingerprint through an on-line sensor is modeled and a sequence of steps is defined to derive a series of impressions from the same master-fingerprint. This allows large fingerprint databases to be randomly generated according to some given parameters. The experimental results validate our technique and prove that it can be very useful for performance evaluation, learning and testing in fingerprint-based systems.",
"title": ""
},
{
"docid": "c61107e9c5213ddb8c5e3b1b14dca661",
"text": "In advanced driving assistance systems, it is important to be able to detect the region covered by the road in the images. This paper proposes a method for estimating the road region in images captured by a vehicle-mounted monocular camera. Our proposed method first estimates all of relevant parameters for the camera motion and the 3D road plane from correspondence points between successive images. By calculating a homography matrix from the estimated camera motion and the estimated road plane parameters, and then warping the image at the previous frame, the road region can be determined. To achieve robustness in various road scenes, our method selects the threshold for determining the road region adaptively and incorporates the assumption of a simple road region boundary. In our experiments, it has been shown that the proposed method is able to estimate the road region in real road environments.",
"title": ""
},
{
"docid": "cc12bd6dcd844c49c55f4292703a241b",
"text": "Eleven cases of sudden death of men restrained in a prone position by police officers are reported. Nine of the men were hogtied, one was tied to a hospital gurney, and one was manually held prone. All subjects were in an excited delirious state when restrained. Three were psychotic, whereas the others were acutely delirious from drugs (six from cocaine, one from methamphetamine, and one from LSD). Two were shocked with stun guns shortly before death. The literature is reviewed and mechanisms of death are discussed.",
"title": ""
},
{
"docid": "61f079cb59505d9bf1de914330dd852e",
"text": "Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9 percent. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, tokensequence, SBPH, and Markovian ddiscriminators. The results deomonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed. MIT Spam Conference 2004 This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2004 201 Broadway, Cambridge, Massachusetts 02139 The Spam-Filtering Accuracy Plateau at 99.9% Accuracy and How to Get Past It. William S. Yerazunis, PhD* Presented at the 2004 MIT Spam Conference January 18, 2004 MIT, Cambridge, Massachusetts Abstract: Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed. Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.",
"title": ""
},
{
"docid": "e016d5fc261def252f819f350b155c1a",
"text": "Risk reduction is one of the key objectives pursued by transport safety policies. Particularly, the formulation and implementation of transport safety policies needs the systematic assessment of the risks, the specification of residual risk targets and the monitoring of progresses towards those ones. Risk and safety have always been considered critical in civil aviation. The purpose of this paper is to describe and analyse safety aspects in civil airports. An increase in airport capacity usually involves changes to runways layout, route structures and traffic distribution, which in turn effect the risk level around the airport. For these reasons third party risk becomes an important issue in airports development. To avoid subjective interpretations and to increase model accuracy, risk information are colleted and evaluated in a rational and mathematical manner. The method may be used to draw risk contour maps so to provide a guide to local and national authorities, to population who live around the airport, and to airports operators. Key-Words: Risk Management, Risk assessment methodology, Safety Civil aviation.",
"title": ""
},
{
"docid": "1b812ef6c607790a0dbcf5e050871fc2",
"text": "This paper introduces Adaptive Music for Affect Improvement (AMAI), a music generation and playback system whose goal is to steer the listener towards a state of more positive affect. AMAI utilizes techniques from game music in order to adjust elements of the music being heard; such adjustments are made adaptively in response to the valence levels of the listener as measured via facial expression and emotion detection. A user study involving AMAI was conducted, with N=19 participants across three groups, one for each strategy of Discharge, Diversion, and Discharge→ Diversion. Significant differences in valence levels between music-related stages of the study were found between the three groups, with Discharge → Diversion exhibiting the greatest increase in valence, followed by Diversion and finally Discharge. Significant differences in positive affect between groups were also found in one before-music and after-music pair of self-reported affect surveys, with Discharge→ Diversion exhibiting the greatest decrease in positive affect, followed by Diversion and finally Discharge; the resulting differences in facial expression valence and self-reported affect offer contrasting con-",
"title": ""
},
{
"docid": "f193757e5ce1e1da8d28bf57175cc7cb",
"text": "Tim Bailey Doctor of Philosophy The University of Sydney August 2002 Mobile Robot Localisation and Mapping in Extensive Outdoor Environments This thesis addresses the issues of scale for practical implementations of simultaneous localisation and mapping (SLAM) in extensive outdoor environments. Building an incremental map while also using it for localisation is of prime importance for mobile robot navigation but, until recently, has been confined to small-scale, mostly indoor, environments. The critical problems for large-scale implementations are as follows. First, data association— finding correspondences between map landmarks and robot sensor measurements—becomes difficult in complex, cluttered environments, especially if the robot location is uncertain. Second, the information required to maintain a consistent map using traditional methods imposes a prohibitive computational burden as the map increases in size. And third, the mathematics for SLAM relies on assumptions of small errors and near-linearity, and these become invalid for larger maps. In outdoor environments, the problems of scale are exacerbated by complex structure and rugged terrain. This can impede the detection of stable discrete landmarks, and can degrade the utility of motion estimates derived from wheel-encoder odometry. This thesis presents the following contributions for large-scale SLAM. First, a batch data association method called combined constraint data association (CCDA) is developed, which permits robust association in cluttered environments even if the robot pose is completely unknown. Second, an alternative to feature-based data association is presented, based on correlation of unprocessed sensor data with the map, for environments that don’t contain easily detectable discrete landmarks. Third, methods for feature management are presented to control the addition and removal of map landmarks, which facilitates map reliability and reduces computation. Fourth, a new map framework called network coupled feature maps (NCFM) is introduced, where the world is divided into a graph of connected submaps. This map framework is shown to solve the problems of consistency and tractability for very large-scale SLAM. The theoretical contributions of this thesis are demonstrated with a series of practical implementations using a scanning range laser in three different outdoor environments. These include: sensor-based dead reckoning, which is a highly accurate alternative to odometry for rough terrain; correlation-based localisation using particle filter methods; and NCFM SLAM over a region greater than 50000 square metres, and including trajectories with large loops.",
"title": ""
},
{
"docid": "270e593aa89fb034d0de977fe6d618b2",
"text": "According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.",
"title": ""
},
{
"docid": "4e938aed527769ad65d85bba48151d21",
"text": "We provide a thorough description of all the artifacts that are generated by the messenger application Telegram on Android OS. We also provide interpretation of messages that are generated and how they relate to one another. Based on the results of digital forensics investigation and analysis in this paper, an analyst/investigator will be able to read, reconstruct and provide chronological explanations of messages which are generated by the user. Using three different smartphone device vendors and Android OS versions as the objects of our experiments, we conducted tests in a forensically sound manner.",
"title": ""
}
] | scidocsrr |
b8d4821e7398675fb93265e3ed8ba517 | PoseShop: Human Image Database Construction and Personalized Content Synthesis | [
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
}
] | [
{
"docid": "000bdac12cd4254500e22b92b1906174",
"text": "In this paper we address the topic of generating automatically accurate, meaning preserving and syntactically correct paraphrases of natural language sentences. The design of methods and tools for paraphrasing natural language text is a core task of natural language processing and is quite useful in many applications and procedures. We present a methodology and a tool developed that performs deep analysis of natural language sentences and generate paraphrases of them. The tool performs deep analysis of the natural language sentence and utilizes sets of paraphrasing techniques that can be used to transform structural parts of the dependency tree of a sentence to an equivalent form and also change sentence words with their synonyms and antonyms. In the evaluation study the performance of the method is examined and the accuracy of the techniques is assessed in terms of syntactic correctness and meaning preserving. The results collected are very promising and show the method to be accurate and able to generate quality paraphrases.",
"title": ""
},
{
"docid": "d8d0b6d8b422b8d1369e99ff8b9dee0e",
"text": "The advent of massive open online courses (MOOCs) poses new learning opportunities for learners as well as challenges for researchers and designers. MOOC students approach MOOCs in a range of fashions, based on their learning goals and preferred approaches, which creates new opportunities for learners but makes it difficult for researchers to figure out what a student’s behavior means, and makes it difficult for designers to develop MOOCs appropriate for all of their learners. Towards better understanding the learners who take MOOCs, we conduct a survey of MOOC learners’ motivations and correlate it to which students complete the course according to the pace set by the instructor/platform (which necessitates having the goal of completing the course, as well as succeeding in that goal). The results showed that course completers tend to be more interested in the course content, whereas non-completers tend to be more interested in MOOCs as a type of learning experience. Contrary to initial hypotheses, however, no substantial differences in mastery-goal orientation or general academic efficacy were observed between completers and non-completers. However, students who complete the course tend to have more self-efficacy for their ability to complete the course, from the beginning.",
"title": ""
},
{
"docid": "b79fb02d0b89d288b1733c3194e304ec",
"text": "In this paper, the idea of a Prepaid energy meter using an AT89S52 microcontroller has been introduced. This concept provides a cost efficient manner of electricity billing. The present energy billing systems are discrete, inaccurate, costly and slow. They are also time and labour consuming. The major drawback of traditional billing system is power and energy theft. This drawback is reduced by using a prepaid energy meter which is based on the concept “Pay first and then use it”. Prepaid energy meter also reduces the error made by humans while taking readings to a large extent and there is no need to take reading in it. The prepaid energy meter uses a recharge card which is available in various ranges (i.e. Rs. 50, Rs. 100, Rs. 200, etc.). The recharge is done by using a keypad and the meter is charged with the amount. According to the power consumption, the amount will be reduced. An LDR (light Dependant Resistor) circuit counts the amount of energy consumed and displays the remaining amount of energy on the LCD. A relay system has been used which shut down or disconnect the energy meter and load through supply mains when the recharge amount is depleted. A buzzer is used as an alarm which starts before the recharge amount reaches a minimum value.",
"title": ""
},
{
"docid": "60edfab6fa5f127dd51a015b20d12a68",
"text": "We discuss the ethical implications of Natural Language Generation systems. We use one particular system as a case study to identify and classify issues, and we provide an ethics checklist, in the hope that future system designers may benefit from conducting their own ethics reviews based on our checklist.",
"title": ""
},
{
"docid": "35aa75f5bd79c8d97e374c33f5bad615",
"text": "Historically, much attention has been given to the unit processes and the integration of those unit processes to improve product yield. Less attention has been given to the wafer environment, either during or post processing. This paper contains a detailed discussion on how particles and Airborne Molecular Contaminants (AMCs) from the wafer environment interact and produce undesired effects on the wafer. Sources of wafer environmental contamination are the process itself, ambient environment, outgassing from wafers, and FOUP contamination. Establishing a strategy that reduces contamination inside the FOUP will increase yield and decrease defect variability. Three primary variables that greatly impact this strategy are FOUP contamination mitigation, FOUP material, and FOUP metrology and cleaning method.",
"title": ""
},
{
"docid": "034bf47c5982756a1cf1c1ccd777d604",
"text": "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.",
"title": ""
},
{
"docid": "4b2b199aeb61128cbee7691bc49e16f5",
"text": "Although deep learning approaches have achieved performance surpassing humans for still image-based face recognition, unconstrained video-based face recognition is still a challenging task due to large volume of data to be processed and intra/inter-video variations on pose, illumination, occlusion, scene, blur, video quality, etc. In this work, we consider challenging scenarios for unconstrained video-based face recognition from multiple-shot videos and surveillance videos with low-quality frames. To handle these problems, we propose a robust and efficient system for unconstrained video-based face recognition, which is composed of face/fiducial detection, face association, and face recognition. First, we use multi-scale single-shot face detectors to efficiently localize faces in videos. The detected faces are then grouped respectively through carefully designed face association methods, especially for multi-shot videos. Finally, the faces are recognized by the proposed face matcher based on an unsupervised subspace learning approach and a subspace-tosubspace similarity metric. Extensive experiments on challenging video datasets, such as Multiple Biometric Grand Challenge (MBGC), Face and Ocular Challenge Series (FOCS), JANUS Challenge Set 6 (CS6) for low-quality surveillance videos and IARPA JANUS Benchmark B (IJB-B) for multiple-shot videos, demonstrate that the proposed system can accurately detect and associate faces from unconstrained videos and effectively learn robust and discriminative features for recognition.",
"title": ""
},
{
"docid": "ca9da9f8113bc50aaa79d654a9eaf95a",
"text": "Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have less parameters to tune than standard networks, and less restrictions on the geometry of the decision boundaries. Consistency results are proved, and substantial numerical evidence is provided on both synthetic and real data sets to assess the excellent performance of our methods in a large variety of prediction problems. Index Terms — Random forests, neural networks, ensemble methods, randomization, sparse networks. 2010 Mathematics Subject Classification: 62G08, 62G20, 68T05.",
"title": ""
},
{
"docid": "bbe59dd74c554d92167f42701a1f8c3d",
"text": "Finding subgraph isomorphisms is an important problem in many applications which deal with data modeled as graphs. While this problem is NP-hard, in recent years, many algorithms have been proposed to solve it in a reasonable time for real datasets using different join orders, pruning rules, and auxiliary neighborhood information. However, since they have not been empirically compared one another in most research work, it is not clear whether the later work outperforms the earlier work. Another problem is that reported comparisons were often done using the original authors’ binaries which were written in different programming environments. In this paper, we address these serious problems by re-implementing five state-of-the-art subgraph isomorphism algorithms in a common code base and by comparing them using many real-world datasets and their query loads. Through our in-depth analysis of experimental results, we report surprising empirical findings.",
"title": ""
},
{
"docid": "80af9f789b334aae324b549fffe4511a",
"text": "The research community is interested in developing automatic systems for the detection of events in video. This is particularly important in the field of sports data analytics. This paper presents an approach for identifying major complex events in soccer videos, starting from object detection and spatial relations between objects. The proposed framework, firstly, detects objects from each single video frame providing a set of candidate objects with associated confidence scores. The event detection system, then, detects events by means of rules which are based on temporal and logical combinations of the detected objects and their relative distances. The effectiveness of the framework is preliminary demonstrated over different events like \"Ball possession\" and \"Kicking the ball\".",
"title": ""
},
{
"docid": "17ec5256082713e85c819bb0a0dd3453",
"text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.",
"title": ""
},
{
"docid": "a2376c57c3c1c51f57f84788f4c6669f",
"text": "Text categorization is a significant tool to manage and organize the surging text data. Many text categorization algorithms have been explored in previous literatures, such as KNN, Naïve Bayes and Support Vector Machine. KNN text categorization is an effective but less efficient classification method. In this paper, we propose an improved KNN algorithm for text categorization, which builds the classification model by combining constrained one pass clustering algorithm and KNN text categorization. Empirical results on three benchmark corpuses show that our algorithm can reduce the text similarity computation substantially and outperform the-state-of-the-art KNN, Naïve Bayes and Support Vector Machine classifiers. In addition, the classification model constructed by the proposed algorithm can be updated incrementally, and it is valuable in practical application.",
"title": ""
},
{
"docid": "cc06553e4d03bf8541597d01de4d5eae",
"text": "Several technologies are used today to improve safety in transportation systems. The development of a system for drivability based on both V2V and V2I communication is considered an important task for the future. V2X communication will be a next step for the transportation safety in the nearest time. A lot of different structures, architectures and communication technologies for V2I based systems are under development. Recently a global paradigm shift known as the Internet-of-Things (IoT) appeared and its integration with V2I communication could increase the safety of future transportation systems. This paper brushes up on the state-of-the-art of systems based on V2X communications and proposes an approach for system architecture design of a safe intelligent driver assistant system using IoT communication. In particular, the paper presents the design process of the system architecture using IDEF modeling methodology and data flows investigations. The proposed approach shows the system design based on IoT architecture reference model.",
"title": ""
},
{
"docid": "db857ce571add6808493f64d9e254655",
"text": "(MANETs). MANET is a temporary network with a group of wireless infrastructureless mobile nodes that communicate with each other within a rapidly dynamic topology. The FMLB protocol distributes transmitted packets over multiple paths through the mobile nodes using Fibonacci sequence. Such distribution can increase the delivery ratio since it reduces the congestion. The FMLB protocol's responsibility is balancing the packet transmission over the selected paths and ordering them according to hops count. The shortest path is used frequently more than other ones. The simulation results show that the proposed protocol has achieved an enhancement on packet delivery ratio, up to 21%, as compared to the Ad Hoc On-demand Distance Vector routing protocol (AODV) protocol. Also the results show the effect of nodes pause time on the data delivery. Finally, the simulation results are obtained by the well-known Glomosim Simulator, version 2.03, without any distance or location measurements devices.",
"title": ""
},
{
"docid": "5552216832bb7315383d1c4f2bfe0635",
"text": "Semantic parsing maps sentences to formal meaning representations, enabling question answering, natural language interfaces, and many other applications. However, there is no agreement on what the meaning representation should be, and constructing a sufficiently large corpus of sentence-meaning pairs for learning is extremely challenging. In this paper, we argue that both of these problems can be avoided if we adopt a new notion of semantics. For this, we take advantage of symmetry group theory, a highly developed area of mathematics concerned with transformations of a structure that preserve its key properties. We define a symmetry of a sentence as a syntactic transformation that preserves its meaning. Semantically parsing a sentence then consists of inferring its most probable orbit under the language’s symmetry group, i.e., the set of sentences that it can be transformed into by symmetries in the group. The orbit is an implicit representation of a sentence’s meaning that suffices for most applications. Learning a semantic parser consists of discovering likely symmetries of the language (e.g., paraphrases) from a corpus of sentence pairs with the same meaning. Once discovered, symmetries can be composed in a wide variety of ways, potentially resulting in an unprecedented degree of immunity to syntactic variation.",
"title": ""
},
{
"docid": "cea53ea6ff16808a2dbc8680d3ef88ee",
"text": "Applying deep reinforcement learning (RL) on real systems suffers from slow data sampling. We propose an enhanced generative adversarial network (EGAN) to initialize an RL agent in order to achieve faster learning. The EGAN utilizes the relation between states and actions to enhance the quality of data samples generated by a GAN. Pre-training the agent with the EGAN shows a steeper learning curve with a 20% improvement of training time in the beginning of learning, compared to no pre-training, and an improvement compared to training with GAN by about 5% with smaller variations. For real time systems with sparse and slow data sampling the EGAN could be used to speed up the early phases of the training process.",
"title": ""
},
{
"docid": "a90dd405d9bd2ed912cacee098c0f9db",
"text": "Many telecommunication companies today have actively started to transform the way they do business, going beyond communication infrastructure providers are repositioning themselves as data-driven service providers to create new revenue streams. In this paper, we present a novel industrial application where a scalable Big data approach combined with deep learning is used successfully to classify massive mobile web log data, to get new aggregated insights on customer web behaviors that could be applied to various industry verticals.",
"title": ""
},
{
"docid": "0952701dd63326f8a78eb5bc9a62223f",
"text": "The self-organizing map (SOM) is an automatic data-analysis method. It is widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics. The most extensive applications, exemplified in this paper, can be found in the management of massive textual databases and in bioinformatics. The SOM is related to the classical vector quantization (VQ), which is used extensively in digital signal processing and transmission. Like in VQ, the SOM represents a distribution of input data items using a finite set of models. In the SOM, however, these models are automatically associated with the nodes of a regular (usually two-dimensional) grid in an orderly fashion such that more similar models become automatically associated with nodes that are adjacent in the grid, whereas less similar models are situated farther away from each other in the grid. This organization, a kind of similarity diagram of the models, makes it possible to obtain an insight into the topographic relationships of data, especially of high-dimensional data items. If the data items belong to certain predetermined classes, the models (and the nodes) can be calibrated according to these classes. An unknown input item is then classified according to that node, the model of which is most similar with it in some metric used in the construction of the SOM. A new finding introduced in this paper is that an input item can even more accurately be represented by a linear mixture of a few best-matching models. This becomes possible by a least-squares fitting procedure where the coefficients in the linear mixture of models are constrained to nonnegative values.",
"title": ""
},
{
"docid": "154f5455f593e8ebf7058cc0a32426a2",
"text": "Many life-log analysis applications, which transfer data from cameras and sensors to a Cloud and analyze them in the Cloud, have been developed with the spread of various sensors and Cloud computing technologies. However, difficulties arise because of the limitation of the network bandwidth between the sensors and the Cloud. In addition, sending raw sensor data to a Cloud may introduce privacy issues. Therefore, we propose distributed deep learning processing between sensors and the Cloud in a pipeline manner to reduce the amount of data sent to the Cloud and protect the privacy of the users. In this paper, we have developed a pipeline-based distributed processing method for the Caffe deep learning framework and investigated the processing times of the classification by varying a division point and the parameters of the network models using data sets, CIFAR-10 and ImageNet. The experiments show that the accuracy of deep learning with coarse-grain data is comparable to that with the default parameter settings, and the proposed distributed processing method has performance advantages in cases of insufficient network bandwidth with actual sensors and a Cloud environment.",
"title": ""
},
{
"docid": "11ddbce61cb175e9779e0fcb5622436f",
"text": "When rewards are sparse and efficient exploration essential, deep Q-learning with -greedy exploration tends to fail. This poses problems for otherwise promising domains such as task-oriented dialog systems, where the primary reward signal, indicating successful completion, typically occurs only at the end of each episode but depends on the entire sequence of utterances. A poor agent encounters such successful dialogs rarely, and a random agent may never stumble upon a successful outcome in reasonable time. We present two techniques that significantly improve the efficiency of exploration for deep Q-learning agents in dialog systems. First, we demonstrate that exploration by Thompson sampling, using Monte Carlo samples from a Bayes-by-Backprop neural network, yields marked improvement over standard DQNs with Boltzmann or -greedy exploration. Second, we show that spiking the replay buffer with a small number of successes, as are easy to harvest for dialog tasks, can make Q-learning feasible when it might otherwise fail catastrophically.",
"title": ""
}
] | scidocsrr |
e11b4608b217a3f5a0935eb948bf582b | Revisiting Android reuse studies in the context of code obfuscation and library usages | [
{
"docid": "8d9f5f5569f0281765a60705c7e9c752",
"text": "Software repositories hold applications that are often categorized to improve the effectiveness of various maintenance tasks. Properly categorized applications allow stakeholders to identify requirements related to their applications and predict maintenance problems in software projects. Manual categorization is expensive, tedious, and laborious – this is why automatic categorization approaches are gaining widespread importance. Unfortunately, for different legal and organizational reasons, the applications’ source code is often not available, thus making it difficult to automatically categorize these applications. In this paper, we propose a novel approach in which we use Application Programming Interface (API) calls from third-party libraries for automatic categorization of software applications that use these API calls. Our approach is general since it enables different categorization algorithms to be applied to repositories that contain both source code and bytecode of applications, since API calls can be extracted from both the source code and byte-code. We compare our approach to a state-of-the-art approach that uses machine learning algorithms for software categorization, and conduct experiments on two large Java repositories: an open-source repository containing 3,286 projects and a closed-source repository with 745 applications, where the source code was not available. Our contribution is twofold: we propose a new approach that makes it possible to categorize software projects without any source code using a small number of API calls as attributes, and furthermore we carried out a comprehensive empirical evaluation of automatic categorization approaches.",
"title": ""
}
] | [
{
"docid": "aaa1ed7c041123e0f7a2f948fdbd9e1a",
"text": "The present study evaluated the venous anatomy of the craniocervical junction, focusing on the suboccipital cavernous sinus (SCS), a vertebral venous plexus surrounding the horizontal portion of the vertebral artery at the skull base. MR imaging was reviewed to clarify the venous anatomy of the SCS in 33 patients. Multiplanar reconstruction MR images were obtained using contrast-enhanced three-dimensional fast spoiled gradient–recalled acquisition in the steady state (3-D fast SPGR) with fat suppression. Connections with the SCS were evaluated for the following venous structures: anterior condylar vein (ACV); posterior condylar vein (PCV); lateral condylar vein (LCV); vertebral artery venous plexus (VAVP); and anterior internal vertebral venous plexus (AVVP). The SCS connected with the ACV superomedially, with the VAVP inferolaterally, and with the AVVP medially. The LCV connected with the external orifice of the ACV and superoanterior aspect of the SCS. The PCV connected with the posteromedial aspect of the jugular bulb and superoposterior aspect of the SCS. The findings of craniocervical junction venography performed in eight patients corresponded with those on MR imaging, other than with regard to the PCV. Contrast-enhanced 3-D fast SPGR allows visualization of the detailed anatomy of these venous structures, and this technique facilitates interventions and description of pathologies occurring in this area.",
"title": ""
},
{
"docid": "c460179cbdb40b9d89b3cc02276d54e1",
"text": "In recent years the sport of climbing has seen consistent increase in popularity. Climbing requires a complex skill set for successful and safe exercising. While elite climbers receive intensive expert coaching to refine this skill set, this progression approach is not viable for the amateur population. We have developed ClimbAX - a climbing performance analysis system that aims for replicating expert assessments and thus represents a first step towards an automatic coaching system for climbing enthusiasts. Through an accelerometer based wearable sensing platform, climber's movements are captured. An automatic analysis procedure detects climbing sessions and moves, which form the basis for subsequent performance assessment. The assessment parameters are derived from sports science literature and include: power, control, stability, speed. ClimbAX was evaluated in a large case study with 53 climbers under competition settings. We report a strong correlation between predicted scores and official competition results, which demonstrate the effectiveness of our automatic skill assessment system.",
"title": ""
},
{
"docid": "4b665ffb50963308818176d4277cfe71",
"text": "Aligning IT to business needs is still one of the most important challenges for many organizations. In a recent survey amongst European IT managers, 78% indicate that their IT is not aligned with business strategy. Another recent survey shows similar results. The message of Business & IT Alignment is logical and undisputed. But if this message is so clear, how can practice be so difficult? To explore the issues with and approaches to BITA in practice, a focused group discussion was organized with IT managers and CIOs of medium sized and large organizations in the Netherlands. In total 23 participants from trade, manufacturing and financial companies joined the discussions. This paper explores the practice of Business & IT Alignment in mult-business-companies. The parenting theory for the role of the corporate center is used to explain the different practical approaches that the participants in the focused groups took.",
"title": ""
},
{
"docid": "9adf653a332e07b8aa055b62449e1475",
"text": "False-belief task have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of highlevel reasoning is computational and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in both the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, the failing in a false-belief test can be the result of the impairment to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of lowlevel signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which simultaneously can be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct falsebelief inferences, while a lack thereof leads to failing. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems.",
"title": ""
},
{
"docid": "da5c56f30c9c162eb80c418ba9dbc31a",
"text": "Text detection and recognition in a natural environment are key components of many applications, ranging from business card digitization to shop indexation in a street. This competition aims at assessing the ability of state-of-the-art methods to detect Multi-Lingual Text (MLT) in scene images, such as in contents gathered from the Internet media and in modern cities where multiple cultures live and communicate together. This competition is an extension of the Robust Reading Competition (RRC) which has been held since 2003 both in ICDAR and in an online context. The proposed competition is presented as a new challenge of the RRC. The dataset built for this challenge largely extends the previous RRC editions in many aspects: the multi-lingual text, the size of the dataset, the multi-oriented text, the wide variety of scenes. The dataset is comprised of 18,000 images which contain text belonging to 9 languages. The challenge is comprised of three tasks related to text detection and script classification. We have received a total of 16 participations from the research and industrial communities. This paper presents the dataset, the tasks and the findings of this RRC-MLT challenge.",
"title": ""
},
{
"docid": "3b0f2413234109c6df1b643b61dc510b",
"text": "Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what's happening.",
"title": ""
},
{
"docid": "c824c8bb8fd9b0b3f0f89df24e8f53d0",
"text": "Ovarian cysts are an extremely common gynecological problem in adolescent. Majority of ovarian cysts are benign with few cases being malignant. Ovarian serous cystadenoma are rare in children. A 14-year-old presented with abdominal pain and severe abdominal distention. She underwent laparotomy and after surgical removal, the mass was found to be ovarian serous cystadenoma on histology. In conclusions, germ cell tumors the most important causes for the giant ovarian masses in children. Epithelial tumors should not be forgotten in the differential diagnosis. Keyword: Adolescent; Ovarian Cysts/diagnosis*; Cystadenoma, Serous/surgery; Ovarian Neoplasms/surgery; Ovarian cystadenoma",
"title": ""
},
{
"docid": "0ff27e119ec045674b9111bb5a9e5d29",
"text": "Description: This book provides an introduction to the complex field of ubiquitous computing Ubiquitous Computing (also commonly referred to as Pervasive Computing) describes the ways in which current technological models, based upon three base designs: smart (mobile, wireless, service) devices, smart environments (of embedded system devices) and smart interaction (between devices), relate to and support a computing vision for a greater range of computer devices, used in a greater range of (human, ICT and physical) environments and activities. The author details the rich potential of ubiquitous computing, the challenges involved in making it a reality, and the prerequisite technological infrastructure. Additionally, the book discusses the application and convergence of several current major and future computing trends.-Provides an introduction to the complex field of ubiquitous computing-Describes how current technology models based upon six different technology form factors which have varying degrees of mobility wireless connectivity and service volatility: tabs, pads, boards, dust, skins and clay, enable the vision of ubiquitous computing-Describes and explores how the three core designs (smart devices, environments and interaction) based upon current technology models can be applied to, and can evolve to, support a vision of ubiquitous computing and computing for the future-Covers the principles of the following current technology models, including mobile wireless networks, service-oriented computing, human computer interaction, artificial intelligence, context-awareness, autonomous systems, micro-electromechanical systems, sensors, embedded controllers and robots-Covers a range of interactions, between two or more UbiCom devices, between devices and people (HCI), between devices and the physical world.-Includes an accompanying website with PowerPoint slides, problems and solutions, exercises, bibliography and further reading Graduate students in computer science, electrical engineering and telecommunications courses will find this a fascinating and useful introduction to the subject. It will also be of interest to ICT professionals, software and network developers and others interested in future trends and models of computing and interaction over the next decades.",
"title": ""
},
{
"docid": "461ec14463eb20962ef168de781ac2a2",
"text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.",
"title": ""
},
{
"docid": "d18d67949bae399cdc148f2ded81903a",
"text": "Stock market news and investing tips are popular topics in Twitter. In this paper, first we utilize a 5-year financial news corpus comprising over 50,000 articles collected from the NASDAQ website for the 30 stock symbols in Dow Jones Index (DJI) to train a directional stock price prediction system based on news content. Then we proceed to prove that information in articles indicated by breaking Tweet volumes leads to a statistically significant boost in the hourly directional prediction accuracies for the prices of DJI stocks mentioned in these articles. Secondly, we show that using document-level sentiment extraction does not yield to a statistically significant boost in the directional predictive accuracies in the presence of other 1-gram keyword features.",
"title": ""
},
{
"docid": "8c91fe2785ae0a7b907315ae52d9a905",
"text": "A method for the determination of pixel correspondence in stereo image pairs is presented. The optic ̄ow vectors that result from the displacement of the point of projection are obtained and the correspondence between pixels of the various objects in the scene is derived from the optic ̄ow vectors. The proposed algorithm is implemented and the correspondence vectors are obtained. Various specialized improvements of the method are implemented and thoroughly tested on sequences of image pairs giving rise to interesting conclusions. The algorithm is highly-parallelizable and therefore suitable for real-time applications.",
"title": ""
},
{
"docid": "18c507d6624f153cb1b7beaf503b0d54",
"text": "The critical period hypothesis for language acquisition (CP) proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. The CP hypothesis was originally proposed for spoken language but recent research has shown that it applies equally to sign language. This paper summarizes a series of experiments designed to investigate whether and how the CP affects the outcome of sign language acquisition. The results show that the CP has robust effects on the development of sign language comprehension. Effects are found at all levels of linguistic structure (phonology, morphology and syntax, the lexicon and semantics) and are greater for first as compared to second language acquisition. In addition, CP effects have been found on all measures of language comprehension examined to date, namely, working memory, narrative comprehension, sentence memory and interpretation, and on-line grammatical processing. The nature of these effects with respect to a model of language comprehension is discussed.",
"title": ""
},
{
"docid": "7a6691ce9d93b42179cd2ce954aeb8c5",
"text": "In this paper, a new dance training system based on the motion capture and virtual reality (VR) technologies is proposed. Our system is inspired by the traditional way to learn new movements-imitating the teacher's movements and listening to the teacher's feedback. A prototype of our proposed system is implemented, in which a student can imitate the motion demonstrated by a virtual teacher projected on the wall screen. Meanwhile, the student's motions will be captured and analyzed by the system based on which feedback is given back to them. The result of user studies showed that our system can successfully guide students to improve their skills. The subjects agreed that the system is interesting and can motivate them to learn.",
"title": ""
},
{
"docid": "83466fa7c291f6a21f6eedd4150043dc",
"text": "E-mail communication has become the need of the hour, with the advent of Internet. However, it is being abused for various illegitimate purposes, such as, spamming, drug trafficking, cyber bullying, phishing, racial vilification, child pornography, and sexual harassment, etc. Several cyber crimes such as identity theft, plagiarism, internet fraud stipulate that the true identity of the e-mail's author be revealed, so that the culprits can be punished in the court of law, by gathering credible evidence against them. Forensic analysis can play a crucial role here, by letting the forensic investigator to gather evidence by examining suspected e-mail accounts. In this context, automated authorship identification can assist the forensic investigator in cyber crime investigation. In this paper we discuss how existing state-of-the-art techniques have been employed for author identification of e-mails and we propose our model for identifying most plausible author of e-mails.",
"title": ""
},
{
"docid": "366800edb32efd098351bc711984854a",
"text": "Building credible Non-Playing Characters (NPCs) in games requires not only to enhance the graphic animation but also the behavioral model. This paper tackles the problem of the dynamics of NPCs social relations depending on their emotional interactions. First, we discuss the need for a dynamic model of social relations. Then, we present our model of social relations for NPCs and we give a qualitative model of the influence of emotions on social relations. We describe the implementation of this model and we briefly illustrate its features on a simple scene.",
"title": ""
},
{
"docid": "7cc9b6f1837d992b64071e2149e81a9a",
"text": "This article presents an application of Augmented Reality technology for interior design. Plus, an Educational Interior Design Project is reviewed. Along with the dramatic progress of digital technology, virtual information techniques are also required for architectural projects. Thus, the new technology of Augmented Reality offers many advantages for digital architectural design and construction fields. AR is also being considered as a new design approach for interior design. In an AR environment, the virtual furniture can be displayed and modified in real-time on the screen, allowing the user to have an interactive experience with the virtual furniture in a real-world environment. Here, AR environment is exploited as the new working environment for architects in architectural design works, and then they can do their work conveniently as such collaborative discussion through AR environment. Finally, this study proposes a new method for applying AR technology to interior design work, where a user can view virtual furniture and communicate with 3D virtual furniture data using a dynamic and flexible user interface. Plus, all the properties of the virtual furniture can be adjusted using occlusionbased interaction method for a Tangible Augmented Reality.",
"title": ""
},
{
"docid": "a6ba94c0faf2fd41d8b1bd5a068c6d3d",
"text": "The main mechanisms responsible for performance degradation of millimeter wave (mmWave) and terahertz (THz) on-chip antennas are reviewed. Several techniques to improve the performance of the antennas and several high efficiency antenna types are presented. In order to illustrate the effects of the chip topology on the antenna, simulations and measurements of mmWave and THz on-chip antennas are shown. Finally, different transceiver architectures are explored with emphasis on the challenges faced in a wireless multi-core environment.",
"title": ""
},
{
"docid": "79c80b3aea50ab971f405b8b58da38de",
"text": "In this paper, the design and implementation of small inductors in printed circuit board (PCB) for domestic induction heating applications is presented. With this purpose, we have developed both a manufacturing technique and an electromagnetic model of the system based on finite-element method (FEM) simulations. The inductor arrangement consists of a stack of printed circuit boards in which a planar litz wire structure is implemented. The developed PCB litz wire structure minimizes the losses in a similar way to the conventional multi-stranded litz wires; whereas the stack of PCBs allows increasing the power transferred to the pot. Different prototypes of the proposed PCB inductor have been measured at low signal levels. Finally, a PCB inductor has been integrated in an electronic stage to test at high signal levels, i.e. in the similar working conditions to the commercial application.",
"title": ""
},
{
"docid": "92a00453bc0c2115a8b37e5acc81f193",
"text": "Choosing the appropriate software development methodology is something which continues to occupy the minds of many IT professionals. The introduction of “Agile” development methodologies such as XP and SCRUM held the promise of improved software quality and reduced delivery times. Combined with a Lean philosophy, there would seem to be potential for much benefit. While evidence does exist to support many of the Lean/Agile claims, we look here at how such methodologies are being adopted in the rigorous environment of safety-critical embedded software development due to its high regulation. Drawing on the results of a systematic literature review we find that evidence is sparse for Lean/Agile adoption in these domains. However, where it has been trialled, “out-of-the-box” Agile practices do not seem to fully suit these environments but rather tailored Agile versions combined with more planbased practices seem to be making inroads.",
"title": ""
},
{
"docid": "135158b230016bb80a08b4c7e2c4f3f2",
"text": "Quite recently, two smart-card-based passwords authenticated key exchange protocols were proposed by Lee et al. and Hwang et al. respectively. However, neither of them achieves two-factor authentication fully since they would become completely insecure once one factor is broken. To overcome these congenital defects, this study proposes such a secure authenticated key exchange protocol that achieves fully two-factor authentication and provides forward security of session keys. And yet our scheme is simple and reasonably efficient. Furthermore, we can provide the rigorous proof of the security for it.",
"title": ""
}
] | scidocsrr |
6c71a1f3fd813d27efa4b205e5cb8dac | Advanced Demand Side Management for the Future Smart Grid Using Mechanism Design | [
{
"docid": "adec3b3578d56cefed73fd74d270ca22",
"text": "In the framework of liberalized electricity markets, distributed generation and controllable demand have the opportunity to participate in the real-time operation of transmission and distribution networks. This may be done by using the virtual power plant (VPP) concept, which consists of aggregating the capacity of many distributed energy resources (DER) in order to make them more accessible and manageable across energy markets. This paper provides an optimization algorithm to manage a VPP composed of a large number of customers with thermostatically controlled appliances. The algorithm, based on a direct load control (DLC), determines the optimal control schedules that an aggregator should apply to the controllable devices of the VPP in order to optimize load reduction over a specified control period. The results define the load reduction bid that the aggregator can present in the electricity market, thus helping to minimize network congestion and deviations between generation and demand. The proposed model, which is valid for both transmission and distribution networks, is tested on a real power system to demonstrate its applicability.",
"title": ""
}
] | [
{
"docid": "5f393e79895bf234c0b96b7ece0d1cae",
"text": "Energy consumption of routers in commonly used mesh-based on-chip networks for chip multiprocessors is an increasingly important concern: these routers consist of a crossbar and complex control logic and can require significant buffers, hence high energy and area consumption. In contrast, an alternative design uses ring-based networks to connect network nodes with small and simple routers. Rings have been used in recent commercial designs, and are well-suited to smaller core counts. However, rings do not scale as efficiently as meshes. In this paper, we propose an energy-efficient yet high performance alternative to traditional mesh-based and ringbased on-chip networks. We aim to attain the scalability of meshes with the router simplicity and efficiency of rings. Our design is a hierarchical ring topology which consists of small local rings connected via one or more global ring. Routing between rings is accomplished using bridge routers that have minimal buffering, and use deflection in place of buffered flow control for simplicity. We comprehensively explore new issues in the design of such a topology, including the design of the routers, livelock freedom, energy, performance and scalability. We propose new router microarchitectures and show that these routers are significantly simpler and more area and energy efficient than both buffered and bufferless mesh based routers. We develop new mechanisms to preserve livelock-free routing in our topology and router design. Our evaluations compare our proposal to a traditional ring network and conventional buffered and bufferless mesh based networks, showing that our proposal reduces average network power by 52.4% (30.4%) and router area footprint by 70.5% from a buffered mesh in 16-node (64-node) configurations, while also improving system performance by 0.6% (5.0%).",
"title": ""
},
{
"docid": "9a2ab1d198468819f32a2b74334528ae",
"text": "This paper introduces GeoSpark an in-memory cluster computing framework for processing large-scale spatial data. GeoSpark consists of three layers: Apache Spark Layer, Spatial RDD Layer and Spatial Query Processing Layer. Apache Spark Layer provides basic Spark functionalities that include loading / storing data to disk as well as regular RDD operations. Spatial RDD Layer consists of three novel Spatial Resilient Distributed Datasets (SRDDs) which extend regular Apache Spark RDDs to support geometrical and spatial objects. GeoSpark provides a geometrical operations library that accesses Spatial RDDs to perform basic geometrical operations (e.g., Overlap, Intersect). System users can leverage the newly defined SRDDs to effectively develop spatial data processing programs in Spark. The Spatial Query Processing Layer efficiently executes spatial query processing algorithms (e.g., Spatial Range, Join, KNN query) on SRDDs. GeoSpark also allows users to create a spatial index (e.g., R-tree, Quad-tree) that boosts spatial data processing performance in each SRDD partition. Preliminary experiments show that GeoSpark achieves better run time performance than its Hadoop-based counterparts (e.g., SpatialHadoop).",
"title": ""
},
{
"docid": "b27224825bb28b9b8d0eea37f8900d42",
"text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.",
"title": ""
},
{
"docid": "46768aeb3c9295a38ff64b3e40a34ec1",
"text": "Google's monolithic repository provides a common source of truth for tens of thousands of developers around the world.",
"title": ""
},
{
"docid": "09f743b18655305b7ad1e39432756525",
"text": "Several applications of chalcones and their derivatives encouraged researchers to increase their synthesis as an alternative for the treatment of pathogenic bacterial and fungal infections. In the present study, chalcone derivatives were synthesized through cross aldol condensation reaction between 4-(N,N-dimethylamino)benzaldehyde and multiarm aromatic ketones. The multiarm aromatic ketones were synthesized through nucleophilic substitution reaction between 4-hydroxy acetophenone and benzyl bromides. The benzyl bromides, multiarm aromatic ketones, and corresponding chalcone derivatives were evaluated for their activities against eleven clinical pathogenic Gram-positive, Gram-negative bacteria, and three pathogenic fungi by the disk diffusion method. The minimum inhibitory concentration was determined by the microbroth dilution technique. The results of the present study demonstrated that benzyl bromide derivatives have strong antibacterial and antifungal properties as compared to synthetic chalcone derivatives and ketones. Benzyl bromides (1a and 1c) showed high ester activity against Gram-positive bacteria and fungi but moderate activity against Gram-negative bacteria. Therefore, these compounds may be considered as good antibacterial and antifungal drug discovery. However, substituted ketones (2a-b) as well as chalcone derivatives (3a-c) showed no activity against all the tested strains except for ketone (2c), which showed moderate activity against Candida albicans.",
"title": ""
},
{
"docid": "d88523afba42431989f5d3bd22f2ad85",
"text": "The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. How to effectively integrate local and contextual visual cues from these regions has become a fundamental problem in object detection. Most existing works simply concatenated features or scores obtained from support regions. In this paper, we proposal a novel gated bi-directional CNN (GBD-Net) to pass messages between features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close iterations are modeled in a much more complex way. It is also shown that message passing is not always helpful depending on individual samples. Gated functions are further introduced to control message transmission and their on-and-off is controlled by extra visual evidence from the input sample. GBD-Net is implemented under the Fast RCNN detection framework. Its effectiveness is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO.",
"title": ""
},
{
"docid": "4bdccdda47aea04c5877587daa0e8118",
"text": "Recognizing text character from natural scene images is a challenging problem due to background interferences and multiple character patterns. Scene Text Character (STC) recognition, which generally includes feature representation to model character structure and multi-class classification to predict label and score of character class, mostly plays a significant role in word-level text recognition. The contribution of this paper is a complete performance evaluation of image-based STC recognition, by comparing different sampling methods, feature descriptors, dictionary sizes, coding and pooling schemes, and SVM kernels. We systematically analyze the impact of each option in the feature representation and classification. The evaluation results on two datasets CHARS74K and ICDAR2003 demonstrate that Histogram of Oriented Gradient (HOG) descriptor, soft-assignment coding, max pooling, and Chi-Square Support Vector Machines (SVM) obtain the best performance among local sampling based feature representations. To improve STC recognition, we apply global sampling feature representation. We generate Global HOG (GHOG) by computing HOG descriptor from global sampling. GHOG enables better character structure modeling and obtains better performance than local sampling based feature representations. The GHOG also outperforms existing methods in the two benchmark datasets.",
"title": ""
},
{
"docid": "dcda412c18e92650d9791023f13e4392",
"text": "Graph can straightforwardly represent the relations between the objects, which inevitably draws a lot of attention of both academia and industry. Achievements mainly concentrate on homogeneous graph and bipartite graph. However, it is difficult to use existing algorithm in actual scenarios. Because in the real world, the type of the objects and the relations are diverse and the amount of the data can be very huge. Considering of the characteristics of \"black market\", we proposeHGsuspector, a novel and scalable algorithm for detecting collective fraud in directed heterogeneous graphs.We first decompose directed heterogeneous graphs into a set of bipartite graphs, then we define a metric on each connected bipartite graph and calculate scores of it, which fuse the structure information and event probability. The threshold for distinguishing between normal and abnormal can be obtained by statistic or other anomaly detection algorithms in scores space. We also provide a technical solution for fraud detection in e-commerce scenario, which has been successfully applied in Jingdong e-commerce platform to detect collective fraud in real time. The experiments on real-world datasets, which has billion nodes and edges, demonstrate that HGsuspector is more accurate and fast than the most practical and state-of-the-art approach by far.",
"title": ""
},
{
"docid": "e6300989e5925d38d09446b3e43092e5",
"text": "Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machine (VM) is hosted on physical server, great energy is consumed by maintaining the servers in data center. More physical servers means more energy consumption and more money cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named as ACO-VMP, so as to effectively use the physical resources and to reduce the number of running physical servers. The number of physical servers is the same as the number of the VMs at the beginning. Then the ACO approach tries to reduce the physical server one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with the number of VMs being up to 600. Experimental results compared with the ones obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP can solve VMP more efficiently to reduce the number of physical servers significantly, especially when the number of VMs is large.",
"title": ""
},
{
"docid": "af81774bce83971009c26fba730bfba3",
"text": "In this paper, we present a stereo visual-inertial odometry algorithm assembled with three separated Kalman filters, i.e., attitude filter, orientation filter, and position filter. Our algorithm carries out the orientation and position estimation with three filters working on different fusion intervals, which can provide more robustness even when the visual odometry estimation fails. In our orientation estimation, we propose an improved indirect Kalman filter, which uses the orientation error space represented by unit quaternion as the state of the filter. The performance of the algorithm is demonstrated through extensive experimental results, including the benchmark KITTI datasets and some challenging datasets captured in a rough terrain campus.",
"title": ""
},
{
"docid": "b776bf3acb830552eb1ecf353b08edee",
"text": "The size and high rate of change of source code comprising a software system make it difficult for software developers to keep up with who on the team knows about particular parts of the code. Existing approaches to this problem are based solely on authorship of code. In this paper, we present data from two professional software development teams to show that both authorship and interaction information about how a developer interacts with the code are important in characterizing a developer's knowledge of code. We introduce the degree-of-knowledge model that computes automatically a real value for each source code element based on both authorship and interaction information. We show that the degree-of-knowledge model can provide better results than an existing expertise finding approach and also report on case studies of the use of the model to support knowledge transfer and to identify changes of interest.",
"title": ""
},
{
"docid": "c3218724e6237c3d51eb41bed1cd5268",
"text": "Recently, wireless sensor networks (WSNs) have become mature enough to go beyond being simple fine-grained continuous monitoring platforms and become one of the enabling technologies for disaster early-warning systems. Event detection functionality of WSNs can be of great help and importance for (near) real-time detection of, for example, meteorological natural hazards and wild and residential fires. From the data-mining perspective, many real world events exhibit specific patterns, which can be detected by applying machine learning (ML) techniques. In this paper, we introduce ML techniques for distributed event detection in WSNs and evaluate their performance and applicability for early detection of disasters, specifically residential fires. To this end, we present a distributed event detection approach incorporating a novel reputation-based voting and the decision tree and evaluate its performance in terms of detection accuracy and time complexity.",
"title": ""
},
{
"docid": "8e8b199787fcc8bf813037fbc26d1be3",
"text": "Recent work on imitation learning has generated policies that reproduce expert behavior from multi-modal data. However, past approaches have focused only on recreating a small number of distinct, expert maneuvers, or have relied on supervised learning techniques that produce unstable policies. This work extends InfoGAIL, an algorithm for multi-modal imitation learning, to reproduce behavior over an extended period of time. Our approach involves reformulating the typical imitation learning setting to include “burn-in demonstrations” upon which policies are conditioned at test time. We demonstrate that our approach outperforms standard InfoGAIL in maximizing the mutual information between predicted and unseen style labels in road scene simulations, and we show that our method leads to policies that imitate expert autonomous driving systems over long time horizons.",
"title": ""
},
{
"docid": "8434630dc54c3015a50d04abba004aca",
"text": "Wolfram syndrome, also known by the mnemonic DIDMOAD (diabetes insipidus, diabetes mellitus, optic atrophy and deafness) is a rare progressive neurodegenerative disorder. This syndrome is further divided to WFS1 and WFS2 based on the different genetic molecular basis and clinical features. In this report, we described a known case of Wolfram syndrome requiring anesthesia for cochlear implantation. Moreover, a brief review of molecular genetics and anesthetic considerations are presented.",
"title": ""
},
{
"docid": "9f3e9e7c493b3b62c7ec257a00f43c20",
"text": "The wind stroke is a common syndrome in clinical disease; the physicians of past generations accumulated much experience in long-term clinical practice and left abundant literature. Looking from this literature, the physicians of past generations had different cognitions of the wind stroke, especially the concept of wind stroke. The connotation of wind stroke differed at different stages, going through a gradually changing process from exogenous disease, true wind stroke, apoplectic wind stroke to cerebral apoplexy.",
"title": ""
},
{
"docid": "bdaa8b87cdaef856b88b7397ddc77d97",
"text": "In artificial neural networks (ANNs), the activation function most used in practice are the logistic sigmoid function and the hyperbolic tangent function. The activation functions used in ANNs have been said to play an important role in the convergence of the learning algorithms. In this paper, we evaluate the use of different activation functions and suggest the use of three new simple functions, complementary log-log, probit and log-log, as activation functions in order to improve the performance of neural networks. Financial time series were used to evaluate the performance of ANNs models using these new activation functions and to compare their performance with some activation functions existing in the literature. This evaluation is performed through two learning algorithms: conjugate gradient backpropagation with Fletcher–Reeves updates and Levenberg–Marquardt.",
"title": ""
},
{
"docid": "d34759a882df6bc482b64530999bcda3",
"text": "The Static Single Assignment (SSA) form is a program representation used in many optimizing compilers. The key step in converting a program to SSA form is called φ-placement. Many algorithms for φ-placement have been proposed in the literature, but the relationships between these algorithms are not well understood.In this article, we propose a framework within which we systematically derive (i) properties of the SSA form and (ii) φ-placement algorithms. This framework is based on a new relation called merge which captures succinctly the structure of a program's control flow graph that is relevant to its SSA form. The φ-placement algorithms we derive include most of the ones described in the literature, as well as several new ones. We also evaluate experimentally the performance of some of these algorithms on the SPEC92 benchmarks.Some of the algorithms described here are optimal for a single variable. However, their repeated application is not necessarily optimal for multiple variables. We conclude the article by describing such an optimal algorithm, based on the transitive reduction of the merge relation, for multi-variable φ-placement in structured programs. The problem for general programs remains open.",
"title": ""
},
{
"docid": "7e9dbc7f1c3855972dbe014e2223424c",
"text": "Speech disfluencies (filled pauses, repe titions, repairs, and false starts) are pervasive in spontaneous speech. The ab ility to detect and correct disfluencies automatically is important for effective natural language understanding, as well as to improve speech models in general. Previous approaches to disfluency detection have relied heavily on lexical information, which makes them less applicable when word recognition is unreliable. We have developed a disfluency detection method using decision tree classifiers that use only local and automatically extracted prosodic features. Because the model doesn’t rely on lexical information, it is widely applicable even when word recognition is unreliable. The model performed significantly better than chance at detecting four disfluency types. It also outperformed a language model in the detection of false starts, given the correct transcription. Combining the prosody model with a specialized language model improved accuracy over either model alone for the detection of false starts. Results suggest that a prosody-only model can aid the automatic detection of disfluencies in spontaneous speech.",
"title": ""
},
{
"docid": "d24ca3024b5abc27f6eb2ad5698a320b",
"text": "Purpose. To study the fracture behavior of the major habit faces of paracetamol single crystals using microindentation techniques and to correlate this with crystal structure and molecular packing. Methods. Vicker's microindentation techniques were used to measure the hardness and crack lengths. The development of all the major radial cracks was analyzed using the Laugier relationship and fracture toughness values evaluated. Results. Paracetamol single crystals showed severe cracking and fracture around all Vicker's indentations with a limited zone of plastic deformation close to the indent. This is consistent with the material being a highly brittle solid that deforms principally by elastic deformation to fracture rather than by plastic flow. Fracture was associated predominantly with the (010) cleavage plane, but was also observed parallel to other lattice planes including (110), (210) and (100). The cleavage plane (010) had the lowest fracture toughness value, Kc = 0.041MPa m1/2, while the greatest value, Kc = 0.105MPa m1/2; was obtained for the (210) plane. Conclusions. Paracetamol crystals showed severe cracking and fracture because of the highly brittle nature of the material. The fracture behavior could be explained on the basis of the molecular packing arrangement and the calculated attachment energies across the fracture planes.",
"title": ""
}
] | scidocsrr |
d0e78e9ca94c071572481a37bbeda677 | Tempo And Beat Estimation Of Musical Signals | [
{
"docid": "05cf044dcb3621a0190403a7961ecb00",
"text": "This paper describes a real-time beat tracking system that recognizes a hierarchical beat structure comprising the quarter-note, half-note, and measure levels in real-world audio signals sampled from popular-music compact discs. Most previous beat-tracking systems dealt with MIDI signals and had difficulty in processing, in real time, audio signals containing sounds of various instruments and in tracking beats above the quarter-note level. The system described here can process music with drums and music without drums and can recognize the hierarchical beat structure by using three kinds of musical knowledge: of onset times, of chord changes, and of drum patterns. This paper also describes several applications of beat tracking, such as beat-driven real-time computer graphics and lighting control.",
"title": ""
},
{
"docid": "01bb8e6af86aa1545958a411653e014c",
"text": "Estimating the tempo of a musical piece is a complex problem, which has received an increasing amount of attention in the past few years. The problem consists of estimating the number of beats per minute (bpm) at which the music is played and identifying exactly when these beats occur. Commercial devices already exist that attempt to extract a musical instrument digital interface (MIDI) clock from an audio signal, indicating both the tempo and the actual location of the beat. Such MIDI clocks can then be used to synchronize other devices (such as drum machines and audio effects) to the audio source, enabling a new range of \" beat-synchronized \" audio processing. Beat detection can also simplify the usually tedious process of manipulating audio material in audio-editing software. Cut and paste operations are made considerably easier if markers are positioned at each beat or at bar boundaries. Looping a drum track over two bars becomes trivial once the location of the beats is known. A third range of applications is the fairly new area of automatic playlist generation, where a computer is given the task to choose a series of audio tracks from a track database in a way similar to what a human deejay would do. The track tempo is a very important selection criterion in this context , as deejays will tend to string tracks with similar tempi back to back. Furthermore, deejays also tend to perform beat-synchronous crossfading between successive tracks manually, slowing down or speeding up one of the tracks so that the beats in the two tracks line up exactly during the crossfade. This can easily be done automatically once the beats are located in the two tracks. The tempo detection systems commercially available appear to be fairly unsophisticated, as they rely mostly on the presence of a strong and regular bass-drum kick at every beat, an assumption that holds mostly with modern musical genres such as techno or drums and bass. For music with a less pronounced tempo such techniques fail miserably and more sophisticated algorithms are needed. This paper describes an off-line tempo detection algorithm , able to estimate a time-varying tempo from an audio track stored, for example, on an audio CD or on a computer hard disk. The technique works in three successive steps: 1) an \" energy flux \" signal is extracted from the track, 2) at each tempo-analysis time, several …",
"title": ""
}
] | [
{
"docid": "6db5f103fa479fc7c7c33ea67d7950f6",
"text": "Problem statement: To design, implement, and test an algorithm for so lving the square jigsaw puzzle problem, which has many applications in image processing, pattern recognition, and computer vision such as restoration of archeologica l artifacts and image descrambling. Approach: The algorithm used the gray level profiles of border pi xels for local matching of the puzzle pieces, which was performed using dynamic programming to facilita te non-rigid alignment of pixels of two gray level profiles. Unlike the classical best-first sea rch, the algorithm simultaneously located the neigh bors of a puzzle piece during the search using the wellknown Hungarian procedure, which is an optimal assignment procedure. To improve the search for a g lobal solution, every puzzle piece was considered as starting piece at various starting locations. Results: Experiments using four well-known images demonstrated the effectiveness of the proposed appr o ch over the classical piece-by-piece matching approach. The performance evaluation was based on a new precision performance measure. For all four test images, the proposed algorithm achieved 1 00% precision rate for puzzles up to 8×8. Conclusion: The proposed search mechanism based on simultaneou s all cation of puzzle pieces using the Hungarian procedure provided better performance than piece-by-piece used in classical methods.",
"title": ""
},
{
"docid": "88a15c0efdfeba3e791ea88862aee0c3",
"text": "Logic-based approaches to legal problem solving model the rule-governed nature of legal argumentation, justification, and other legal discourse but suffer from two key obstacles: the absence of efficient, scalable techniques for creating authoritative representations of legal texts as logical expressions; and the difficulty of evaluating legal terms and concepts in terms of the language of ordinary discourse. Data-centric techniques can be used to finesse the challenges of formalizing legal rules and matching legal predicates with the language of ordinary parlance by exploiting knowledge latent in legal corpora. However, these techniques typically are opaque and unable to support the rule-governed discourse needed for persuasive argumentation and justification. This paper distinguishes representative legal tasks to which each approach appears to be particularly well suited and proposes a hybrid model that exploits the complementarity of each.",
"title": ""
},
{
"docid": "c711fa74e32891553404b989c1ee1b44",
"text": "This paper presents a fully actuated UAV platform with a nonparallel design. Standard multirotor UAVs equipped with a number of parallel thrusters would result in underactuation. Fighting horizontal wind would require the robot to tilt its whole body toward the direction of the wind. We propose a hexrotor UAV with nonparallel thrusters which results in faster response to disturbances for precision position keeping. A case study is presented to show that hexrotor with a nonparallel design takes less time to resist wind gust than a standard design. We also give the results of a staged peg-in-hole task that measures the rising time of exerting forces using different actuation mechanisms.",
"title": ""
},
{
"docid": "faa5037145abef48d2acf5435df97bf2",
"text": "This clinical report describes the rehabilitation of a patient with a history of mandibulectomy that involved the use of a fibula free flap and an implant-supported fixed complete denture. A recently introduced material, polyetherketoneketone (PEKK), was used as the framework material for the prosthesis, and the treatment produced favorable esthetic and functional results.",
"title": ""
},
{
"docid": "1ee352ff083da1f307674414a5640d64",
"text": "The present article examines personality as a predictor of college achievement beyond the traditional predictors of high school grades (HSGPA) and SAT scores. In an undergraduate sample (N=131), self and informant-rated conscientiousness using the Big Five Inventory (BFI; John, Donahue & Kentle, 1991) robustly correlated with academic achievement as indexed by both freshman GPA and senior GPA. A model including traditional predictors and informant ratings of conscientiousness accounted for 18% of the variance in freshman GPA and 37% of the variance in senior GPA; conscientiousness alone explained unique variance in senior GPA beyond the traditional predictors, even when freshman GPA was included in the model. Conscientiousness is a valid and unique predictor of college performance, and informant ratings may be useful in its assessment for this purpose. Acquaintance reports 3 Acquaintance reports of personality and academic achievement: A case for conscientiousness The question of what makes a good student “good” lies at the core of a socially-relevant discussion of college admissions criteria. While past research has shown personality variables to be related to school performance (e.g. Costa & McCrae, 1992; De Raad, 1996), academic achievement is still widely assumed to be more a function of intellectual ability than personality. The purpose of this study is to address two ambiguities that trouble past research in this area: the choice of conceptually appropriate outcome measures and the overuse of self-report data. A highly influential meta-analysis by Barrick and Mount (1991) concluded that conscientiousness is a robust and valid predictor of job performance across all criteria and occupations. Soon after, Costa and McCrae (1992) provided evidence that conscientiousness is likewise related to academic performance. This finding has been replicated by others (recently, Chamorro-Premuzic & Farnham, 2003a and 2003b). Moreover, conscientiousness appears to be free of some of the unwanted complications associated with ability as assessed by the SAT: Hogan and Hogan (1995) reported that personality inventories generally do not systematically discriminate against any ethnic or national group, and thus may offer more equitable bases for selection (see also John, et al., 1991). Still, skepticism remains. Farsides and Woodfield (2003) called the relationship between personality variables and academic performance in previous literature “erratic and, where present, modest” (p. 1229). Green, Peters and Webster (1991) found academic success only weakly associated with personality factors; Rothstien, Paunonen, Rush and King (1994) found that the Big Five factors failed to significantly predict academic performance criteria among a sample of MBA students; Allik and Realo (1997) and Diseth (2003) found most of the valid variance in achievement to be unrelated to personality. Acquaintance reports 4 The current study seeks to address two pervasive obstructions to conceptual clarity in the previous literature: 1) Lack of consistency in the measurement of “academic achievement.” Past studies have used individual exam grades, final grades in a single course, semester GPA, year-end GPA, GPA at the time of the study, or variables such as attendance or participation. The present study uses concrete and consequential outcomes: freshman cumulative GPA (fGPA; the measure most commonly employed in previous research) and senior cumulative GPA (sGPA; a final, more comprehensive measure of college success.). 
2) Near-exclusive use of self-report personality measures. Reliance on self-reports can be problematic because what one believes of oneself may or may not be an accurate or complete assessment of one’s true strengths and weaknesses. Thus, the present research utilizes ratings of personality provided by the self and by informants. As the personality inventories used in these analyses were administered up to three years prior to the measurement of the dependent variable, finding a meaningful relationship between the two will provide evidence that one’s traits – evaluated by someone else and a number of years in the past! – are consistent enough to serve as useful predictors of a real and important outcome. Within the confines of these parameters, and based upon previous literature, it is hypothesized that: conscientiousness will fail to show the mean differences in ethnicity problematic of SAT scores; both selfand informant-rated conscientiousness will be positively and significantly related to both outcome measures; and finally, conscientiousness will be capable of explaining incremental variance in both outcome measures beyond what is accounted for by the traditional predictors. Method Acquaintance reports 5 Participants This study examined the predictors of academic achievement in a sample of 131 target participants (54.2% female, 45.8% male), who were those among an original sample of 217 undergraduates with sufficiently complete data for the present analyses, as described below. Due to the “minority majority” status of the UCR campus population, the diverse sample included substantial proportions of Asians or Asian Americans (43.5%), Hispanics or Latin Americans (19.8%), Caucasians (16.0%), African Americans (12.9%), and students of other ethnic descent (7.6%). The study also includes 258 informants who described participants with whom they were acquainted. Each target participant and informant was paid $10 per hour. The larger data set was originally designed to explore issues of accuracy in personality judgment. Other analyses, completed and planned (see Letzring, Block & Funder, 2004; Letzring, Wells & Funder, in press; Vazire & Funder, 2006), address different topics and do not overlap with those in the current study. Targets & Informants To deal with missing data, all participants in the larger sample who were lacking any one of the predictor variables (SAT score, HSGPA, or either selfor informant-rated Conscientiousness) were dropped (reducing the N from 217 to 153 at this stage of selection). Among the remaining participants, 21 were missing sGPA (i.e., had not yet graduated at the time the GPAs were collected from the University) but had a junior-level GPA; for these, a regression using junior GPA to predict sGPA was performed (r = 0.96 between the two) and the resulting score was imputed. 22 participants had neither sGPA nor a junior GPA; these last were dropped, leaving the final N = 131 for target participants. Means and standard deviations for both the Acquaintance reports 6 dependent and predictor variables in this smaller sample were comparable to those of the larger group from which they were drawn. Each participant provided contact information for two people who knew him or her best and would be willing to provide information about him or her. 127 participants in our target sample recruited the requested 2 informants, while 4 participants recruited only 1, for a total of 258 informants. 
Measures Traditional Predictors Participants completed a release form granting access to their academic records; HSGPA and SAT scores were later obtained from the UCR Registrar’s Office. The Registrar provided either an SAT score or an SAT score converted from an American College Testing (ACT) score. Only the total score (rather than the separate verbal/quantitative sub-scores) was used. Personality In order to assess traits at a global level, participants provided self-reports and informants provided peer ratings using the Big Five Inventory (BFI; John, Donahue & Kentle, 1991), which assesses extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience. BFI-scale reliabilities and other psychometric properties have been shown to be similar to those of the much longer scales of Costa and McCrae’s (1990) NEO-FFI (John, et al. 1991). Where two informants were available (all but 4 cases), a composite of their ratings was created by averaging the conscientiousness scale scores. Reliability of the averaged informants’ conscientiousness rating was .59. Academic performance Acquaintance reports 7 Cumulative fGPA and sGPA were collected from the campus Registrar. While the data collection phase of the original, larger project began a few years before the analyses completed for this study and all of the participants had progressed in their academic standing, not all of them had yet completed their senior year. Participants missing GPA data were handled as described above. Results Analyses examined mean differences among ethnic groups and correlations between each of the potential predictors and the two outcome measures. A final set of analyses entered the predictor variables into hierarchical regressions predicting GPA. Descriptive Statistics Mean differences by ethnicity in HSGPA, SAT scores, and BFI scores were examined with one-way ANOVAs (see Table 1). Members of the different groups were admitted to UCR with approximately the same incoming HSGPA (M = 3.51) and very little variation (SD = 0.37), F(4, 126) = 0.68, p = 0.609. There was, however, a significant difference between ethnicities in their entering SAT scores, F(4, 126) = 5.56, p = 3.7 x 10, with Caucasians the highest and African Americans the lowest. As predicted, there were no significant differences in conscientiousness across ethnicities. Correlations There were no significant correlations between gender and any of the variables included in this study. HSGPA and SAT scores – the two traditional predictors – are only modestly related in this sample: r(131) = 0.12, n.s., indicating that they are independently capable of explaining variance in college GPA. sGPA, containing all the variance of fGPA, is thus well correlated with it, r(131) = 0.68, p < .05. Correlations between academic performance and the Acquaintance reports 8 hypothesized predictors of performance (HSGPA, SAT scores, and conscientiousness) are presented in Table 2. While the traditional ",
"title": ""
},
{
"docid": "881325bbeb485fc405c2cb77f9a12dfb",
"text": "Drawing on social capital theory, this study examined whether college students’ self-disclosure on a social networking site was directly associated with social capital, or related indirectly through the degree of positive feedback students got from Internet friends. Structural equation models applied to anonymous, self-report survey data from 264 first-year students at 3 universities in Beijing, China, indicated direct effects on bridging social capital and indirect effects on bonding social capital. Effects remained significant, though modest in magnitude, after controlling for social skills level. Findings suggest ways in which social networking sites can foster social adjustment as an adolescent transition to residential col-",
"title": ""
},
{
"docid": "095dbdc1ac804487235cdd0aeffe8233",
"text": "Sentiment analysis is the task of identifying whether the opinion expressed in a document is positive or negative about a given topic. Unfortunately, many of the potential applications of sentiment analysis are currently infeasible due to the huge number of features found in standard corpora. In this paper we systematically evaluate a range of feature selectors and feature weights with both Naı̈ve Bayes and Support Vector Machine classifiers. This includes the introduction of two new feature selection methods and three new feature weighting methods. Our results show that it is possible to maintain a state-of-the art classification accuracy of 87.15% while using less than 36% of the features.",
"title": ""
},
{
"docid": "46200c35a82b11d989c111e8398bd554",
"text": "A physics-based compact gallium nitride power semiconductor device model is presented in this work, which is the first of its kind. The model derivation is based on the classical drift-diffusion model of carrier transport, which expresses the channel current as a function of device threshold voltage and externally applied electric fields. The model is implemented in the Saber® circuit simulator using the MAST hardware description language. The model allows the user to extract the parameters from the dc I-V and C-V characteristics that are also available in the device datasheets. A commercial 80 V EPC GaN HEMT is used to demonstrate the dynamic validation of the model against the transient device characteristics in a double-pulse test and a boost converter circuit configuration. The simulated versus measured device characteristics show good agreement and validate the model for power electronics design and applications using the next generation of GaN HEMT devices.",
"title": ""
},
{
"docid": "88afb98c0406d7c711b112fbe2a6f25e",
"text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "117242011595e9c7de501a2360199a48",
"text": "This paper proposes a supervised learning approach to jointly perform facial Action Unit (AU) localisation and intensity estimation. Contrary to previous works that try to learn an unsupervised representation of the Action Unit regions, we propose to directly and jointly estimate all AU intensities through heatmap regression, along with the location in the face where they cause visible changes. Our approach aims to learn a pixel-wise regression function returning a score per AU, which indicates an AU intensity at a given spatial location. Heatmap regression then generates an image, or channel, per AU, in which each pixel indicates the corresponding AU intensity. To generate the ground-truth heatmaps for a target AU, the facial landmarks are first estimated, and a 2D Gaussian is drawn around the points where the AU is known to cause changes. The amplitude and size of the Gaussian is determined by the intensity of the AU. We show that using a single Hourglass network suffices to attain new state of the art results, demonstrating the effectiveness of such a simple approach. The use of heatmap regression allows learning of a shared representation between AUs without the need to rely on latent representations, as these are implicitly learned from the data. We validate the proposed approach on the BP4D dataset, showing a modest improvement on recent, complex, techniques, as well as robustness against misalignment errors. Code for testing and models will be available to download from https://github.com/ESanchezLozano/ Action-Units-Heatmaps.",
"title": ""
},
{
"docid": "353761bae5088e8ee33025fc04695297",
"text": " Land use can exert a powerful influence on ecological systems, yet our understanding of the natural and social factors that influence land use and land-cover change is incomplete. We studied land-cover change in an area of about 8800 km2 along the lower part of the Wisconsin River, a landscape largely dominated by agriculture. Our goals were (a) to quantify changes in land cover between 1938 and 1992, (b) to evaluate the influence of abiotic and socioeconomic variables on land cover in 1938 and 1992, and (c) to characterize the major processes of land-cover change between these two points in time. The results showed a general shift from agricultural land to forest. Cropland declined from covering 44% to 32% of the study area, while forests and grassland both increased (from 32% to 38% and from 10% to 14% respectively). Multiple linear regressions using three abiotic and two socioeconomic variables captured 6% to 36% of the variation in land-cover categories in 1938 and 9% to 46% of the variation in 1992. Including socioeconomic variables always increased model performance. Agricultural abandonment and a general decline in farming intensity were the most important processes of land-cover change among the processes considered. Areas characterized by the different processes of land-cover change differed in the abiotic and socioeconomic variables that had explanatory power and can be distinguished spatially. Understanding the dynamics of landscapes dominated by human impacts requires methods to incorporate socioeconomic variables and anthropogenic processes in the analyses. Our method of hypothesizing and testing major anthropogenic processes may be a useful tool for studying the dynamics of cultural landscapes.",
"title": ""
},
{
"docid": "db637c4e90111ebe0218fa4ccc2ce759",
"text": "Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of largescale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.",
"title": ""
},
{
"docid": "3823f92483c12b7ff9c7b5b9a020088f",
"text": "This paper addresses the credit card fraud detection problem in the context of Big Data, based on machine learning techniques. In the fraud detection task, typically the available datasets for ML training present some peculiarities, such as the unavoidable condition of a strong class imbalance, the existence of unlabeled transactions, and the large number of records that must be processed. The present paper aims to propose a methodology for automatic detection of fraudulent transactions, that tackle all these problems. The methodology is based on a Balanced Random Forest, that can be used in supervised and semi-supervised scenarios through a co-training approach. Two different schemes for the co-training approach are tested, in order to overcome the class imbalance problem. Moreover, a Spark platform and Hadoop file system support our solution, in order to enable the scalability of the proposed solution. The proposed approach achieves an absolute improvement of around 24% in terms of geometric mean in comparison to a standard random forest learning strategy.",
"title": ""
},
{
"docid": "a530b9b997f6e471f74beca325038067",
"text": "Do you remember insult sword fi ghting in Monkey Island? The moment when you got off the elevator in the fourth mission of Call of Duty: Modern Warfare 2? Your romantic love affair with Leliana or Alistair in Dragon Age? Dancing as Madison for Paco in his nightclub in Heavy Rain? Climbing and fi ghting Cronos in God of War 3? Some of the most memorable moments from successful video games, have a strong emotional impact on us. It is only natural that game designers and user researchers are seeking methods to better understand the positive and negative emotions that we feel when we are playing games. While game metrics provide excellent methods and techniques to infer behavior from the interaction of the player in the virtual game world, they cannot infer or see emotional signals of a player. Emotional signals are observable changes in the state of the human player, such as facial expressions, body posture, or physiological changes in the player’s body. The human eye can observe facial expression, gestures or human sounds that could tell us how a player is feeling, but covert physiological changes are only revealed to us when using sensor equipment, such as",
"title": ""
},
{
"docid": "83b79fc95e90a303f29a44ef8730a93f",
"text": "Internet of Things (IoT) is a concept that envisions all objects around us as part of internet. IoT coverage is very wide and includes variety of objects like smart phones, tablets, digital cameras and sensors. Once all these devices are connected to each other, they enable more and more smart processes and services that support our basic needs, environment and health. Such enormous number of devices connected to internet provides many kinds of services. They also produce huge amount of data and information. Cloud computing is one such model for on-demand access to a shared pool of configurable resources (computer, networks, servers, storage, applications, services, and software) that can be provisioned as infrastructures ,software and applications. Cloud based platforms help to connect to the things around us so that we can access anything at any time and any place in a user friendly manner using customized portals and in built applications. Hence, cloud acts as a front end to access IoT. Applications that interact with devices like sensors have special requirements of massive storage to store big data, huge computation power to enable the real time processing of the data, information and high speed network to stream audio or video. Here we have describe how Internet of Things and Cloud computing can work together can address the Big Data problems. We have also illustrated about Sensing as a service on cloud using few applications like Augmented Reality, Agriculture, Environment monitoring,etc. Finally, we propose a prototype model for providing sensing as a service on cloud.",
"title": ""
},
{
"docid": "79041480e35083e619bd804423459f2b",
"text": "Dynamic pricing is the dynamic adjustment of prices to consumers depending upon the value these customers attribute to a product or service. Today’s digital economy is ready for dynamic pricing; however recent research has shown that the prices will have to be adjusted in fairly sophisticated ways, based on sound mathematical models, to derive the benefits of dynamic pricing. This article attempts to survey different models that have been used in dynamic pricing. We first motivate dynamic pricing and present underlying concepts, with several examples, and explain conditions under which dynamic pricing is likely to succeed. We then bring out the role of models in computing dynamic prices. The models surveyed include inventory-based models, data-driven models, auctions, and machine learning. We present a detailed example of an e-business market to show the use of reinforcement learning in dynamic pricing.",
"title": ""
},
{
"docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c",
"text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.",
"title": ""
},
{
"docid": "9a758183aa6bf6ee8799170b5a526e7e",
"text": "The field of serverless computing has recently emerged in support of highly scalable, event-driven applications. A serverless application is a set of stateless functions, along with the events that should trigger their activation. A serverless runtime allocates resources as events arrive, avoiding the need for costly pre-allocated or dedicated hardware. \nWhile an attractive economic proposition, serverless computing currently lags behind the state of the art when it comes to function composition. This paper addresses the challenge of programming a composition of functions, where the composition is itself a serverless function. \nWe demonstrate that engineering function composition into a serverless application is possible, but requires a careful evaluation of trade-offs. To help in evaluating these trade-offs, we identify three competing constraints: functions should be considered as black boxes; function composition should obey a substitution principle with respect to synchronous invocation; and invocations should not be double-billed. \nFurthermore, we argue that, if the serverless runtime is limited to a reactive core, i.e. one that deals only with dispatching functions in response to events, then these constraints form the serverless trilemma. Without specific runtime support, compositions-as-functions must violate at least one of the three constraints. \nFinally, we demonstrate an extension to the reactive core of an open-source serverless runtime that enables the sequential composition of functions in a trilemma-satisfying way. We conjecture that this technique could be generalized to support other combinations of functions.",
"title": ""
},
{
"docid": "7e6bc406394f5621b02acb9f0187667f",
"text": "A model predictive control (MPC) approach to active steering is presented for autonomous vehicle systems. The controller is designed to stabilize a vehicle along a desired path while rejecting wind gusts and fulfilling its physical constraints. Simulation results of a side wind rejection scenario and a double lane change maneuver on slippery surfaces show the benefits of the systematic control methodology used. A trade-off between the vehicle speed and the required preview on the desired path for vehicle stabilization is highlighted",
"title": ""
},
{
"docid": "a68ccab91995603b3dbb54e014e79091",
"text": "Qualitative models arising in artificial intelligence domain often concern real systems that are difficult to represent with traditional means. However, some promise for dealing with such systems is offered by research in simulation methodology. Such research produces models that combine both continuous and discrete-event formalisms. Nevertheless, the aims and approaches of the AI and the simulation communities remain rather mutually ill understood. Consequently, there is a need to bridge theory and methodology in order to have a uniform language when either analyzing or reasoning about physical systems. This article introduces a methodology and formalism for developing multiple, cooperative models of physical systems of the type studied in qualitative physics. The formalism combines discrete-event and continuous models and offers an approach to building intelligent machines capable of physical modeling and reasoning.",
"title": ""
}
] | scidocsrr |
fbe7cfaf6c9981468179c6654f36d700 | Validating viral marketing strategies in Twitter via agent-based social simulation | [
{
"docid": "1991322dce13ee81885f12322c0e0f79",
"text": "The quality of the interpretation of the sentiment in the online buzz in the social media and the online news can determine the predictability of financial markets and cause huge gains or losses. That is why a number of researchers have turned their full attention to the different aspects of this problem lately. However, there is no well-rounded theoretical and technical framework for approaching the problem to the best of our knowledge. We believe the existing lack of such clarity on the topic is due to its interdisciplinary nature that involves at its core both behavioral-economic topics as well as artificial intelligence. We dive deeper into the interdisciplinary nature and contribute to the formation of a clear frame of discussion. We review the related works that are about market prediction based on onlinetext-mining and produce a picture of the generic components that they all have. We, furthermore, compare each system with the rest and identify their main differentiating factors. Our comparative analysis of the systems expands onto the theoretical and technical foundations behind each. This work should help the research community to structure this emerging field and identify the exact aspects which require further research and are of special significance. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b4d58813c09030e1c68b4fb573d45389",
"text": "With the empirical evidence that Twitter influences the financial market, there is a need for a bottom-up approach focusing on individual Twitter users and their message propagation among a selected Twitter community with regard to the financial market. This paper presents an agent-based simulation framework to model the Twitter network growth and message propagation mechanism in the Twitter financial community. Using the data collected through the Twitter API, the model generates a dynamic community network with message propagation rates by different agent types. The model successfully validates against the empirical characteristics of the Twitter financial community in terms of network demographics and aggregated message propagation pattern. Simulation of the 2013 Associated Press hoax incident demonstrates that removing critical nodes of the network (users with top centrality) dampens the message propagation process linearly and critical node of the highest betweenness centrality has the optimal effect in reducing the spread of the malicious message to lesser ratio of the community.",
"title": ""
}
] | [
{
"docid": "eb3e3c7b255cd8f86a4b02d1f2c23a83",
"text": "Style transfer is a process of migrating a style from a given image to the content of another, synthesizing a new image, which is an artistic mixture of the two. Recent work on this problem adopting convolutional neural-networks (CNN) ignited a renewed interest in this field, due to the very impressive results obtained. There exists an alternative path toward handling the style transfer task, via the generalization of texture synthesis algorithms. This approach has been proposed over the years, but its results are typically less impressive compared with the CNN ones. In this paper, we propose a novel style transfer algorithm that extends the texture synthesis work of Kwatra et al. (2005), while aiming to get stylized images that are closer in quality to the CNN ones. We modify Kwatra’s algorithm in several key ways in order to achieve the desired transfer, with emphasis on a consistent way for keeping the content intact in selected regions, while producing hallucinated and rich style in others. The results obtained are visually pleasing and diverse, shown to be competitive with the recent CNN style transfer algorithms. The proposed algorithm is fast and flexible, being able to process any pair of content + style images.",
"title": ""
},
{
"docid": "363dc30dbf42d5309366ec109c445c48",
"text": "There has been significant recent interest in fast imaging with sparse sampling. Conventional imaging methods are based on Shannon-Nyquist sampling theory. As such, the number of required samples often increases exponentially with the dimensionality of the image, which limits achievable resolution in high-dimensional scenarios. The partially-separable function (PSF) model has previously been proposed to enable sparse data sampling in this context. Existing methods to leverage PSF structure utilize tailored data sampling strategies, which enable a specialized two-step reconstruction procedure. This work formulates the PSF reconstruction problem using the matrix-recovery framework. The explicit matrix formulation provides new opportunities for data acquisition and image reconstruction with rank constraints. Theoretical results from the emerging field of low-rank matrix recovery (which generalizes theory from sparse-vector recovery) and our empirical results illustrate the potential of this new approach.",
"title": ""
},
{
"docid": "06e50887ddec8b0e858173499ce2ee11",
"text": "Over the last few years, we've seen a plethora of Internet of Things (IoT) solutions, products, and services make their way into the industry's marketplace. All such solutions will capture large amounts of data pertaining to the environment as well as their users. The IoT's objective is to learn more and better serve system users. Some IoT solutions might store data locally on devices (\"things\"), whereas others might store it in the cloud. The real value of collecting data comes through data processing and aggregation on a large scale, where new knowledge can be extracted. However, such procedures can lead to user privacy issues. This article discusses some of the main challenges of privacy in the IoT as well as opportunities for research and innovation. The authors also introduce some of the ongoing research efforts that address IoT privacy issues.",
"title": ""
},
{
"docid": "be8b65d39ee74dbee0835052092040da",
"text": "We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact. Adopting a straightforward decomposition of the problem into entity detection, entity linking, relation prediction, and evidence combination, we explore simple yet strong baselines. On the popular SIMPLEQUESTIONS dataset, we find that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art, and techniques that do not use neural networks also perform reasonably well. These results show that gains from sophisticated deep learning techniques proposed in the literature are quite modest and that some previous models exhibit unnecessary complexity.",
"title": ""
},
{
"docid": "83856fb0a5e53c958473fdf878b89b20",
"text": "Due to the expensive nature of an industrial robot, not all universities are equipped with areal robots for students to operate. Learning robotics without accessing to an actual robotic system has proven to be difficult for undergraduate students. For instructors, it is also an obstacle to effectively teach fundamental robotic concepts. Virtual robot simulator has been explored by many researchers to create a virtual environment for teaching and learning. This paper presents structure of a course project which requires students to develop a virtual robot simulator. The simulator integrates concept of kinematics, inverse kinematics and controls. Results show that this approach assists and promotes better students‟ understanding of robotics.",
"title": ""
},
{
"docid": "f0d55892fb927c5c5324cfb7b8380bda",
"text": "The paper presents application of data mining methods for recognizing the most significant genes and gene sequences (treated as features) stored in a dataset of gene expression microarray. The investigations are performed for autism data. Few chosen methods of feature selection have been applied and their results integrated in the final outcome. In this way we find the contents of small set of the most important genes associated with autism. They have been applied in the classification procedure aimed on recognition of autism from reference group members. The results of numerical experiments concerning selection of the most important genes and classification of the cases on the basis of the selected genes will be discussed. The main contribution of the paper is in developing the fusion system of the results of many selection approaches into the final set, most closely associated with autism. We have also proposed special procedure of estimating the number of highest rank genes used in classification procedure. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dfa611e19a3827c66ea863041a3ef1e2",
"text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.",
"title": ""
},
{
"docid": "e6e1b1e282449e8c75be714ff022ce39",
"text": "AIMS\nThe aims of this paper were (1) to raise awareness of the issues in questionnaire development and subsequent psychometric evaluation, and (2) to provide strategies to enable nurse researchers to design and develop their own measure and evaluate the quality of existing nursing measures.\n\n\nBACKGROUND\nThe number of questionnaires developed by nurses has increased in recent years. While the rigour applied to the questionnaire development process may be improving, we know that nurses are still not generally adept at the psychometric evaluation of new measures. This paper explores the process by which a reliable and valid questionnaire can be developed.\n\n\nMETHODS\nWe critically evaluate the theoretical and methodological issues associated with questionnaire design and development and present a series of heuristic decision-making strategies at each stage of such development. The range of available scales is presented and we discuss strategies to enable item generation and development. The importance of stating a priori the number of factors expected in a prototypic measure is emphasized. Issues of reliability and validity are explored using item analysis and exploratory factor analysis and illustrated using examples from recent nursing research literature.\n\n\nCONCLUSION\nQuestionnaire design and development must be supported by a logical, systematic and structured approach. To aid this process we present a framework that supports this and suggest strategies to demonstrate the reliability and validity of the new and developing measure.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nIn developing the evidence base of nursing practice using this method of data collection, it is vital that questionnaire design incorporates preplanned methods to establish reliability and validity. Failure to develop a questionnaire sufficiently may lead to difficulty interpreting results, and this may impact upon clinical or educational practice. This paper presents a critical evaluation of the questionnaire design and development process and demonstrates good practice at each stage of this process.",
"title": ""
},
{
"docid": "f320e7f092040e72de062dc8203bbcfb",
"text": "This research provides a security assessment of the Android framework-Google's software stack for mobile devices. The authors identify high-risk threats to the framework and suggest several security solutions for mitigating them.",
"title": ""
},
{
"docid": "c62bc7391e55d66c9e27befe81446ebe",
"text": "Opaque predicates have been widely used to insert superfluous branches for control flow obfuscation. Opaque predicates can be seamlessly applied together with other obfuscation methods such as junk code to turn reverse engineering attempts into arduous work. Previous efforts in detecting opaque predicates are far from mature. They are either ad hoc, designed for a specific problem, or have a considerably high error rate. This paper introduces LOOP, a Logic Oriented Opaque Predicate detection tool for obfuscated binary code. Being different from previous work, we do not rely on any heuristics; instead we construct general logical formulas, which represent the intrinsic characteristics of opaque predicates, by symbolic execution along a trace. We then solve these formulas with a constraint solver. The result accurately answers whether the predicate under examination is opaque or not. In addition, LOOP is obfuscation resilient and able to detect previously unknown opaque predicates. We have developed a prototype of LOOP and evaluated it with a range of common utilities and obfuscated malicious programs. Our experimental results demonstrate the efficacy and generality of LOOP. By integrating LOOP with code normalization for matching metamorphic malware variants, we show that LOOP is an appealing complement to existing malware defenses.",
"title": ""
},
{
"docid": "05518ac3a07fdfb7bfede8df8a7a500b",
"text": "The prevalence of food allergy is rising for unclear reasons, with prevalence estimates in the developed world approaching 10%. Knowledge regarding the natural course of food allergies is important because it can aid the clinician in diagnosing food allergies and in determining when to consider evaluation for food allergy resolution. Many food allergies with onset in early childhood are outgrown later in childhood, although a minority of food allergy persists into adolescence and even adulthood. More research is needed to improve food allergy diagnosis, treatment, and prevention.",
"title": ""
},
{
"docid": "189522686d83ff7761afe6e105bec409",
"text": "This paper emphasizes on safeguarding the hierarchical structure in wireless sensor network by presenting an Intrusion Detection Technique, which is very useful, simple and works effectively in improving and enhancing the security in wireless sensor Network. This IDS works on the combination of anomaly detection algorithm and intrusion detection nodes. Here all the features and numerous architectures of popular IDS(s) along with their confines and benefits are also being described.",
"title": ""
},
{
"docid": "83991055d207c47bc2d5af0d83bfcf9c",
"text": "BACKGROUND\nThe present study aimed at investigating the role of depression and attachment styles in predicting cell phone addiction.\n\n\nMETHODS\nIn this descriptive correlational study, a sample including 100 students of Payame Noor University (PNU), Reyneh Center, Iran, in the academic year of 2013-2014 was selected using volunteer sampling. Participants were asked to complete the adult attachment inventory (AAI), Beck depression inventory-13 (BDI-13) and the cell phone overuse scale (COS).\n\n\nFINDINGS\nResults of the stepwise multiple regression analysis showed that depression and avoidant attachment style were the best predictors of students' cell phone addiction (R(2) = 0.23).\n\n\nCONCLUSION\nThe results of this study highlighted the predictive value of depression and avoidant attachment style concerning students' cell phone addiction.",
"title": ""
},
{
"docid": "e32068682c313637f97718e457914381",
"text": "Optimal load shedding is a very critical issue in power systems. It plays a vital role, especially in third world countries. A sudden increase in load can affect the important parameters of the power system like voltage, frequency and phase angle. This paper presents a case study of Pakistan’s power system, where the generated power, the load demand, frequency deviation and load shedding during a 24-hour period have been provided. An artificial neural network ensemble is aimed for optimal load shedding. The objective of this paper is to maintain power system frequency stability by shedding an accurate amount of load. Due to its fast convergence and improved generalization ability, the proposed algorithm helps to deal with load shedding in an efficient manner.",
"title": ""
},
{
"docid": "114ec493a4b0b26c643a49bc0cc3c9c7",
"text": "Automatic emotion recognition has attracted great interest and numerous solutions have been proposed, most of which focus either individually on facial expression or acoustic information. While more recent research has considered multimodal approaches, individual modalities are often combined only by simple fusion at the feature and/or decision-level. In this paper, we introduce a novel approach using 3-dimensional convolutional neural networks (C3Ds) to model the spatio-temporal information, cascaded with multimodal deep-belief networks (DBNs) that can represent the audio and video streams. Experiments conducted on the eNTERFACE multimodal emotion database demonstrate that this approach leads to improved multimodal emotion recognition performance and significantly outperforms recent state-of-the-art proposals.",
"title": ""
},
{
"docid": "3d301d13d54b0abd5157b4640820ae0a",
"text": "Plant hormones regulate many aspects of plant growth and development. Both auxin and cytokinin have been known for a long time to act either synergistically or antagonistically to control several significant developmental processes, such as the formation and maintenance of meristem. Over the past few years, exciting progress has been made to reveal the molecular mechanisms underlying the auxin-cytokinin action and interaction. In this review, we shall briefly discuss the major progress made in auxin and cytokinin biosynthesis, auxin transport, and auxin and cytokinin signaling. The frameworks for the complicated interaction of these two hormones in the control of shoot apical meristem and root apical meristem formation as well as their roles in in vitro organ regeneration are the major focus of this review.",
"title": ""
},
{
"docid": "f69b170e9ccd7f04cbc526373b0ad8ee",
"text": "meaning (overall M = 5.89) and significantly higher than with any of the other three abstract meanings (overall M = 2.05, all ps < .001). Procedure. Under a cover story of studying advertising slogans, participants saw one of the 22 target brands and thought about its abstract concept in memory. They were then presented, on a single screen, with four alternative slogans (in random order) for the target brand and were asked to rank the slogans, from 1 (“best”) to 4 (“worst”), in terms of how well the slogan fits the image of the target brand. Each slogan was intended to distinctively communicate the abstract meaning associated with one of the four high-levelmeaning associated with one of the four high-level brand value dimensions uncovered in the pilot study. After a series of filler tasks, participants indicated their attitude toward the brand on a seven-point scale (1 = “very unfavorable,” and 7 = “very favorable”). Ranking of the slogans. We conducted separate nonparametric Kruskal-Wallis tests on each country’s data to evaluate differences in the rank order for each of the four slogans among the four types of brand concepts. In all countries, the tests were significant (the United States: all 2(3, N = 539) ≥ 145.4, all ps < .001; China: all 2(3, N = 208) ≥ 52.8, all ps < .001; Canada: all 2(3, N = 380) ≥ 33.3, all ps < .001; Turkey: all 2(3, N = 380) ≥ 51.0, all ps < .001). We pooled the data from the four countries and conducted follow-up tests to evaluate pairwise differences in the rank order of each slogan among the four brand concepts, controlling for Type I error across tests using the Bonferroni approach. The results of these tests indicated that each slogan was ranked at the top in terms of favorability when it matched the brand concept (self-enhancement brand concept: Mself-enhancement slogan = 1.77; openness brand FIGURE 2 Structural Relations Among Value Dimensions from Multidimensional Scaling (Pilot: Study 1) b = benevolence, t = tradition, c = conformity, sec = security S e l f E n h a n c e m e n t IN D VID U A L C O N C ER N S C O LL EC TI VE C O N C ER N S",
"title": ""
},
{
"docid": "ead5432cb390756a99e4602a9b6266bf",
"text": "In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text stokes. Finally, text regions are located by simply thresholding the text-line confident map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.",
"title": ""
},
{
"docid": "0c509f98c65a48c31d32c0c510b4c13f",
"text": "An EM based straight forward design and pattern synthesis technique for series fed microstrip patch array antennas is proposed. An optimization of each antenna element (λ/4-transmission line, λ/2-patch, λ/4-transmission line) of the array is performed separately. By introducing an equivalent circuit along with an EM parameter extraction method, each antenna element can be optimized for its resonance frequency and taper amplitude, so to shape the aperture distribution for the cascaded elements. It will be shown that the array design based on the multiplication of element factor and array factor fails in case of patch width tapering, due to the inconsistency of the element patterns. To overcome this problem a line width tapering is suggested which keeps the element patterns nearly constant while still providing a broad amplitude taper range. A symmetric 10 element antenna array with a Chebyshev tapering (-20dB side lobe level) operating at 5.8 GHz has been designed, compared for the two tapering methods and validated with measurement.",
"title": ""
},
{
"docid": "5bb6e93244e976725bc9663c0afe8136",
"text": "Video streaming platforms like Twitch.tv or YouNow have attracted the attention of both users and researchers in the last few years. Users increasingly adopt these platforms to share user-generated videos while researchers study their usage patterns to learn how to provide better and new services.",
"title": ""
}
] | scidocsrr |
c99c16b7a14e22ae7b05c3f30fff9491 | Financial Cryptography and Data Security | [
{
"docid": "f6fc0992624fd3b3e0ce7cc7fc411154",
"text": "Digital currencies are a globally spreading phenomenon that is frequently and also prominently addressed by media, venture capitalists, financial and governmental institutions alike. As exchange prices for Bitcoin have reached multiple peaks within 2013, we pose a prevailing and yet academically unaddressed question: What are users' intentions when changing their domestic into a digital currency? In particular, this paper aims at giving empirical insights on whether users’ interest regarding digital currencies is driven by its appeal as an asset or as a currency. Based on our evaluation, we find strong indications that especially uninformed users approaching digital currencies are not primarily interested in an alternative transaction system but seek to participate in an alternative investment vehicle.",
"title": ""
},
{
"docid": "68c1a1fdd476d04b936eafa1f0bc6d22",
"text": "Smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes, without the need of an external trusted authority. Since smart contracts handle and transfer assets of considerable value, besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering the assets. We study this problem in Ethereum, the most well-known and used framework for smart contracts so far. We analyse the security vulnerabilities of Ethereum smart contracts, providing a taxonomy of common programming pitfalls which may lead to vulnerabilities. We show a series of attacks which exploit these vulnerabilities, allowing an adversary to steal money or cause other damage.",
"title": ""
},
{
"docid": "1315247aa0384097f5f9e486bce09bd4",
"text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.",
"title": ""
}
] | [
{
"docid": "37f157cdcd27c1647548356a5194f2bc",
"text": "Purpose – The aim of this paper is to propose a novel evaluation framework to explore the “root causes” that hinder the acceptance of using internal cloud services in a university. Design/methodology/approach – The proposed evaluation framework incorporates the duo-theme DEMATEL (decision making trial and evaluation laboratory) with TAM (technology acceptance model). The operational procedures were proposed and tested on a university during the post-implementation phase after introducing the internal cloud services. Findings – According to the results, clear understanding and operational ease under the theme perceived ease of use (PEOU) are more imperative; whereas improved usefulness and productivity under the theme perceived usefulness (PU) are more urgent to foster the usage of internal clouds in the case university. Research limitations/implications – Based on the findings, some intervention activities were suggested to enhance the level of users’ acceptance of internal cloud solutions in the case university. However, the results should not be generalized to apply to other educational establishments. Practical implications – To reduce the resistance from using internal clouds, some necessary intervention activities such as developing attractive training programs, creating interesting workshops, and rewriting user friendly manual or handbook are recommended. Originality/value – The novel two-theme DEMATEL has greatly contributed to the conventional one-theme DEMATEL theory. The proposed two-theme DEMATEL procedures were the first attempt to evaluate the acceptance of using internal clouds in university. The results have provided manifest root-causes under two distinct themes, which help derive effectual intervention activities to foster the acceptance of usage of internal clouds in a university.",
"title": ""
},
{
"docid": "106915eaac271c255aef1f1390577c64",
"text": "Parking is costly and limited in almost every major city in the world. Innovative parking systems for meeting near-term parking demand are needed. This paper proposes a novel, secure, and intelligent parking system (SmartParking) based on secured wireless network and sensor communication. From the point of users' view, SmartParking is a secure and intelligent parking service. The parking reservation is safe and privacy preserved. The parking navigation is convenient and efficient. The whole parking process will be a non-stop service. From the point of management's view, SmartParking is an intelligent parking system. The parking process can be modeled as birth-death stochastic process and the prediction of revenues can be made. Based on the prediction, new business promotion can be made, for example, on-sale prices and new parking fees. In SmartParking, new promotions can be published through wireless network. We address hardware/software architecture, implementations, and analytical models and results. The evaluation of this proposed system proves its efficiency.",
"title": ""
},
{
"docid": "1d2ffd37c15b41ec5124e4ec4dfbc80c",
"text": "Developing transparent predictive analytics has attracted significant research attention recently. There have been multiple theories on how to model learning transparency but none of them aims to understand the internal and often complicated modeling processes. In this paper we adopt a contemporary philosophical concept called \"constructivism\", which is a theory regarding how human learns. We hypothesize that a critical aspect of transparent machine learning is to \"reveal\" model construction with two key process: (1) the assimilation process where we enhance our existing learning models and (2) the accommodation process where we create new learning models. With this intuition we propose a new learning paradigm, constructivism learning, using a Bayesian nonparametric model to dynamically handle the creation of new learning tasks. Our empirical study on both synthetic and real data sets demonstrate that the new learning algorithm is capable of delivering higher quality models (as compared to base lines and state-of-the-art) and at the same time increasing the transparency of the learning process.",
"title": ""
},
{
"docid": "0cf9ef0e5e406509f35c0dcd7ea598af",
"text": "This paper proposes a method to reduce cogging torque of a single side Axial Flux Permanent Magnet (AFPM) motor according to analysis results of finite element analysis (FEA) method. First, the main cause of generated cogging torque will be studied using three dimensional FEA method. In order to reduce the cogging torque, a dual layer magnet step skewed (DLMSS) method is proposed to determine the shape of dual layer magnets. The skewed angle of magnetic poles between these two layers is determined using equal air gap flux of inner and outer layers. Finally, a single-sided AFPM motor based on the proposed methods is built as experimental platform to verify the effectiveness of the design. Meanwhile, the differences between design and tested results will be analyzed for future research and improvement.",
"title": ""
},
{
"docid": "96bc9c8fa154d8e6cc7d0486c99b43d5",
"text": "A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the output. In the ideal case such structures achieve a voltage gain which equals the number of transmission lines used. To achieve maximum efficiency, mismatch and secondary modes must be suppressed. Here we describe a TLT based on parallel plate transmission lines. The chosen geometry results in a high efficiency, due to good matching and minimized secondary modes. A second advantage of this design is that the electric field strength between the conductors is the same throughout the entire TLT. This makes the design suitable for high voltage applications. To investigate the concept of this TLT design, measurements are done on two different TLT designs. One TLT consists of 4 transmission lines, while the other one has 8 lines. Both designs are constructed of DiBond™. This material consists of a flat polyethylene inner core with an aluminum sheet on both sides. Both TLT's have an input impedance of 3.125 Ω. Their output impedances are 50 and 200 Ω, respectively. The measurements show that, on a matched load, this structure achieves a voltage gain factor of 3.9 when using 4 transmission lines and 7.9 when using 8 lines.",
"title": ""
},
{
"docid": "215d867487afac8ab0641b144f99b312",
"text": "Post-marketing surveillance systems rely on spontaneous reporting databases maintained by health regulators to identify safety issues arising from medicines once they are marketed. Quantitative safety signal detection methods such as Proportional Reporting ratio (PRR), Reporting Odds Ratio (ROR), Bayesian Confidence Propagation Neural Network (BCPNN), and empirical Bayesian technique are applied to spontaneous reporting data to identify safety signals [1-3]. These methods have been adopted as standard quantitative methods by many pharmaco-surveillance centres to screen for safety signals of medicines [2-5]. Studies have validated these methods and showed that the methods have low to moderate sensitivity to detect adverse drug reaction (ADR) signals, ranging between 28% to 56%, while the specificity of the methods ranged from 82% to 95% [6-8].",
"title": ""
},
{
"docid": "86cb3c072e67bed8803892b72297812c",
"text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of",
"title": ""
},
{
"docid": "81d933a449c0529ab40f5661f3b1afa1",
"text": "Scene classification plays a key role in interpreting the remotely sensed high-resolution images. With the development of deep learning, supervised learning in classification of Remote Sensing with convolutional networks (CNNs) has been frequently adopted. However, researchers paid less attention to unsupervised learning in remote sensing with CNNs. In order to filling the gap, this paper proposes a set of CNNs called Multiple lAyeR feaTure mAtching(MARTA) generative adversarial networks (GANs) to learn representation using only unlabeled data. There will be two models of MARTA GANs involved: (1) a generative model G that captures the data distribution and provides more training data; (2) a discriminative model D that estimates the possibility that a sample came from the training data rather than G and in this way a well-formed representation of dataset can be learned. Therefore, MARTA GANs obtain the state-of-the-art results which outperform the results got from UC-Merced Land-use dataset and Brazilian Coffee Scenes dataset.",
"title": ""
},
{
"docid": "513b378c3fc2e2e6f23a406b63dc33a9",
"text": "Mining frequent itemsets from the large transactional database is a very critical and important task. Many algorithms have been proposed from past many years, But FP-tree like algorithms are considered as very effective algorithms for efficiently mine frequent item sets. These algorithms considered as efficient because of their compact structure and also for less generation of candidates itemsets compare to Apriori and Apriori like algorithms. Therefore this paper aims to presents a basic Concepts of some of the algorithms (FP-Growth, COFI-Tree, CT-PRO) based upon the FPTree like structure for mining the frequent item sets along with their capabilities and comparisons.",
"title": ""
},
{
"docid": "f3c1ad1431d3aced0175dbd6e3455f39",
"text": "BACKGROUND\nMethylxanthine therapy is commonly used for apnea of prematurity but in the absence of adequate data on its efficacy and safety. It is uncertain whether methylxanthines have long-term effects on neurodevelopment and growth.\n\n\nMETHODS\nWe randomly assigned 2006 infants with birth weights of 500 to 1250 g to receive either caffeine or placebo until therapy for apnea of prematurity was no longer needed. The primary outcome was a composite of death, cerebral palsy, cognitive delay (defined as a Mental Development Index score of <85 on the Bayley Scales of Infant Development), deafness, or blindness at a corrected age of 18 to 21 months.\n\n\nRESULTS\nOf the 937 infants assigned to caffeine for whom adequate data on the primary outcome were available, 377 (40.2%) died or survived with a neurodevelopmental disability, as compared with 431 of the 932 infants (46.2%) assigned to placebo for whom adequate data on the primary outcome were available (odds ratio adjusted for center, 0.77; 95% confidence interval [CI], 0.64 to 0.93; P=0.008). Treatment with caffeine as compared with placebo reduced the incidence of cerebral palsy (4.4% vs. 7.3%; adjusted odds ratio, 0.58; 95% CI, 0.39 to 0.87; P=0.009) and of cognitive delay (33.8% vs. 38.3%; adjusted odds ratio, 0.81; 95% CI, 0.66 to 0.99; P=0.04). The rates of death, deafness, and blindness and the mean percentiles for height, weight, and head circumference at follow-up did not differ significantly between the two groups.\n\n\nCONCLUSIONS\nCaffeine therapy for apnea of prematurity improves the rate of survival without neurodevelopmental disability at 18 to 21 months in infants with very low birth weight. (ClinicalTrials.gov number, NCT00182312 [ClinicalTrials.gov].).",
"title": ""
},
{
"docid": "352bcf1c407568871880ad059053e1ec",
"text": "In this paper we present a novel system for sketching the motion of a character. The process begins by sketching a character to be animated. An animated motion is then created for the character by drawing a continuous sequence of lines, arcs, and loops. These are parsed and mapped to a parameterized set of output motions that further reflect the location and timing of the input sketch. The current system supports a repertoire of 18 different types of motions in 2D and a subset of these in 3D. The system is unique in its use of a cursive motion specification, its ability to allow for fast experimentation, and its ease of use for non-experts.",
"title": ""
},
{
"docid": "bff34a024324774d28ccaa23722e239e",
"text": "We review the Philippine frogs of the genus Leptobrachuim. All previous treatments have referred Philippine populations to L. hasseltii, a species we restrict to Java and Bali, Indonesia. We use external morphology, body proportions, color pattern, advertisement calls, and phylogenetic analysis of molecular sequence data to show that Philippine populations of Leptobrachium represent three distinct and formerly unrecognized evolutionary lineages, and we describe each (populations on Mindoro, Palawan, and Mindanao Island groups) as new species. Our findings accentuate the degree to which the biodiversity of Philippine amphibians is currently underestimated and in need of comprehensive review with new and varied types of data. LAGOM: Pinagbalik aralan namin ang mga palaka sa Pilipinas mula sa genus Leptobrachium. Ang nakaraang mga palathala ay tumutukoy sa populasyon ng L. hasseltii, ang uri ng palaka na aming tinakda lamang sa Java at Bali, Indonesia. Ginamit namin ang panglabas na morpolohiya, proporsiyon ng pangangatawan, kulay disenyo, pantawag pansin, at phylogenetic na pagsusuri ng molekular na pagkakasunod-sunod ng datos upang maipakita na ang populasyon sa Pilipinas ng Leptobrachium ay kumakatawan sa tatlong natatangi at dating hindi pa nakilalang ebolusyonaryong lipi. Inilalarawan din naming ang bawat isa (populasyon sa Mindoro, Palawan, at mga grupo ng isla sa Mindanao) na bagong uri ng palaka. Ang aming natuklasan ay nagpapatingkad sa antas kung saan ang biodibersidad ng amphibians sa Pilipinas sa kasalukuyan ay may mababang pagtatantya at nangangailangan ng malawakang pagbabalik-aral ng mga bago at iba’t ibang uri ng",
"title": ""
},
{
"docid": "472f5def60d3cb1be23f63a78f84080e",
"text": "In financial terms, a business strategy is much more like a series of options than like a single projected cash flow. Executing a strategy almost always involves making a sequence of major decisions. Some actions are taken immediately while others are deliberately deferred so that managers can optimize their choices as circumstances evolve. While executives readily grasp the analogy between strategy and real options, until recently the mechanics of option pricing was so complex that few companies found it practical to use when formulating strategy. But advances in both computing power and our understanding of option pricing over the last 20 years now make it feasible to apply real-options thinking to strategic decision making. To analyze a strategy as a portfolio of related real options, this article exploits a framework presented by the author in \"Investment Opportunities as Real Options: Getting Started on the Numbers\" (HBR July-August 1998). That article explained how to get from discounted-cash-flow value to option value for a typical project; in other words, it was about reaching a number. This article extends that framework, exploring how, once you've worked out the numbers, you can use option pricing to improve decision making about the sequence and timing of a portfolio of strategic investments. Timothy Luehrman shows executives how to plot their strategies in two-dimensional \"option space,\" giving them a way to \"draw\" a strategy in terms that are neither wholly strategic nor wholly financial, but some of both. Such pictures inject financial discipline and new insight into how a company's future opportunities can be actively cultivated and harvested.",
"title": ""
},
{
"docid": "d7ff935c38f2adad660ba580e6f3bc6c",
"text": "In this report, we provide a comparative analysis of different techniques for user intent classification towards the task of app recommendation. We analyse the performance of different models and architectures for multi-label classification over a dataset with a relative large number of classes and only a handful examples of each class. We focus, in particular, on memory network architectures, and compare how well the different versions perform under the task constraints. Since the classifier is meant to serve as a module in a practical dialog system, it needs to be able to work with limited training data and incorporate new data on the fly. We devise a 1-shot learning task to test the models under the above constraint. We conclude that relatively simple versions of memory networks perform better than other approaches. Although, for tasks with very limited data, simple non-parametric methods perform comparably, without needing the extra training data.",
"title": ""
},
{
"docid": "ffd04d534aefbfb00879fed5c8480dd7",
"text": "This paper deals with the mechanical construction and static strength analysis of an axial flux permanent magnet machine with segmented armature torus topology, which consists of two external rotors and an inner stator. In order to conduct the three dimensional magnetic flux, the soft magnetic composites is used to manufacture the stator segments and the rotor yoke. On the basis of the detailed electromagnetic analysis, the main geometric dimensions of the machine are determined, which is also the precondition of the mechanical construction. Through the application of epoxy with high thermal conductivity and high mechanical strength, the independent segments of the stator are bounded together with the liquid-cooling system, which makes a high electrical load possible. Due to the unavoidable errors in the manufacturing and montage, there might be large force between the rotors and the stator. Thus, the rotor is held with a rotor carrier made from aluminum alloy with high elastic modulus and the form of the rotor carrier is optimized, in order to reduce the axial deformation. In addition, the shell and the shaft are designed and the choice of bearings is discussed. Finally, the strain and deformation of different parts are analyzed with the help of finite element method to validate the mechanical construction.",
"title": ""
},
{
"docid": "ec6bfc49858a2a4ae3c8122fad68d437",
"text": "A major aim of the current study was to determine what classroom teachers perceived to be the greatest barriers affecting their capacity to deliver successful physical education (PE) programs. An additional aim was to examine the impact of these barriers on the type and quality of PE programs delivered. This study applied a mixed-mode design involving data source triangulation using semistructured interviews with classroom teachers (n = 31) and teacher-completed questionnaires (n = 189) from a random sample of 38 schools. Results identified the key factors inhibiting PE teachers, which were categorized as teacher-related or institutional. Interestingly, the five greatest barriers were defined as institutional or out of the teacher's control. The major adverse effects of these barriers were evident in reduced time spent teaching PE and delivering PE lessons of questionable quality.",
"title": ""
},
{
"docid": "a92aa1ea6faf19a2257dce1dda9cd0d0",
"text": "This paper introduces a novel content-adaptive image downscaling method. The key idea is to optimize the shape and locations of the downsampling kernels to better align with local image features. Our content-adaptive kernels are formed as a bilateral combination of two Gaussian kernels defined over space and color, respectively. This yields a continuum ranging from smoothing to edge/detail preserving kernels driven by image content. We optimize these kernels to represent the input image well, by finding an output image from which the input can be well reconstructed. This is technically realized as an iterative maximum-likelihood optimization using a constrained variation of the Expectation-Maximization algorithm. In comparison to previous downscaling algorithms, our results remain crisper without suffering from ringing artifacts. Besides natural images, our algorithm is also effective for creating pixel art images from vector graphics inputs, due to its ability to keep linear features sharp and connected.",
"title": ""
},
{
"docid": "64d711b609fb683b5679ed9f4a42275c",
"text": "We address the problem of image feature learning for the applications where multiple factors exist in the image generation process and only some factors are of our interest. We present a novel multi-task adversarial network based on an encoder-discriminator-generator architecture. The encoder extracts a disentangled feature representation for the factors of interest. The discriminators classify each of the factors as individual tasks. The encoder and the discriminators are trained cooperatively on factors of interest, but in an adversarial way on factors of distraction. The generator provides further regularization on the learned feature by reconstructing images with shared factors as the input image. We design a new optimization scheme to stabilize the adversarial optimization process when multiple distributions need to be aligned. The experiments on face recognition and font recognition tasks show that our method outperforms the state-of-the-art methods in terms of both recognizing the factors of interest and generalization to images with unseen variations.",
"title": ""
},
{
"docid": "4e50e68e099ab77aedcb0abe8b7a9ca2",
"text": "In the downlink transmission scenario, power allocation and beamforming design at the transmitter are essential when using multiple antenna arrays. This paper considers a multiple input–multiple output broadcast channel to maximize the weighted sum-rate under the total power constraint. The classical weighted minimum mean-square error (WMMSE) algorithm can obtain suboptimal solutions but involves high computational complexity. To reduce this complexity, we propose a fast beamforming design method using unsupervised learning, which trains the deep neural network (DNN) offline and provides real-time service online only with simple neural network operations. The training process is based on an end-to-end method without labeled samples avoiding the complicated process of obtaining labels. Moreover, we use the “APoZ”-based pruning algorithm to compress the network volume, which further reduces the computational complexity and volume of the DNN, making it more suitable for low computation-capacity devices. Finally, the experimental results demonstrate that the proposed method improves computational speed significantly with performance close to the WMMSE algorithm.",
"title": ""
},
{
"docid": "fb0e9f6f58051b9209388f81e1d018ff",
"text": "Because many databases contain or can be embellished with structural information, a method for identifying interesting and repetitive substructures is an essential component to discovering knowledge in such databases. This paper describes the SUBDUE system, which uses the minimum description length (MDL) principle to discover substructures that compress the database and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. Inclusion of background knowledgeguides SUBDUE toward appropriate substructures for a particular domain or discovery goal, and the use of an inexact graph match allows a controlled amount of deviations in the instance of a substructure concept. We describe the application of SUBDUE to a variety of domains. We also discuss approaches to combining SUBDUE with non-structural discovery systems.",
"title": ""
}
] | scidocsrr |
920c43b711d430a0095d54456bf40d2f | Real-Time Lane Departure Warning System on a Lower Resource Platform | [
{
"docid": "be283056a8db3ab5b2481f3dc1f6526d",
"text": "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.",
"title": ""
}
] | [
{
"docid": "c504800ce08654fb5bf49356d2f7fce3",
"text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.",
"title": ""
},
{
"docid": "5b91e467d87f42fa6ca352a09b44cc48",
"text": "We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the wellknown 1-norm regularization problem using a new regularizer which controls the number of learned features common for all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn commonacross-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves the performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select – not learn – a few common features across the tasks.",
"title": ""
},
{
"docid": "b9e8007220be2887b9830c05c283f8a5",
"text": "INTRODUCTION\nHealth-care professionals are trained health-care providers who occupy a potential vanguard position in human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) prevention programs and the management of AIDS patients. This study was performed to assess HIV/AIDS-related knowledge, attitude, and practice (KAP) and perceptions among health-care professionals at a tertiary health-care institution in Uttarakhand, India, and to identify the target group where more education on HIV is needed.\n\n\nMATERIALS AND METHODS\nA cross-sectional KAP survey was conducted among five groups comprising consultants, residents, medical students, laboratory technicians, and nurses. Probability proportional to size sampling was used for generating random samples. Data analysis was performed using charts and tables in Microsoft Excel 2016, and statistical analysis was performed using the Statistical Package for the Social Science software version 20.0.\n\n\nRESULTS\nMost participants had incomplete knowledge regarding the various aspects of HIV/AIDS. Attitude in all the study groups was receptive toward people living with HIV/AIDS. Practical application of knowledge was best observed in the clinicians as well as medical students. Poor performance by technicians and nurses was observed in prevention and prophylaxis. All groups were well informed about the National AIDS Control Policy except technicians.\n\n\nCONCLUSION\nPoor knowledge about HIV infection, particularly among the young medical students and paramedics, is evidence of the lacunae in the teaching system, which must be kept in mind while formulating teaching programs. As suggested by the respondents, Information Education Communication activities should be improvised making use of print, electronic, and social media along with interactive awareness sessions, regular continuing medical educations, and seminars to ensure good quality of safe modern medical care.",
"title": ""
},
{
"docid": "6aaabe17947bc455d940047745ed7962",
"text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.",
"title": ""
},
{
"docid": "2aa885b2b531d4035a25928c242ad2ca",
"text": "Doxorubicin (Dox) is a cytotoxic drug widely incorporated in various chemotherapy protocols. Severe side effects such as cardiotoxicity, however, limit Dox application. Mechanisms by which Dox promotes cardiac damage and cardiomyocyte cell death have been investigated extensively, but a definitive picture has yet to emerge. Autophagy, regarded generally as a protective mechanism that maintains cell viability by recycling unwanted and damaged cellular constituents, is nevertheless subject to dysregulation having detrimental effects for the cell. Autophagic cell death has been described, and has been proposed to contribute to Dox-cardiotoxicity. Additionally, mitophagy, autophagic removal of damaged mitochondria, is affected by Dox in a manner contributing to toxicity. Here we will review Dox-induced cardiotoxicity and cell death in the broad context of the autophagy and mitophagy processes.",
"title": ""
},
{
"docid": "b0bd9a0b3e1af93a9ede23674dd74847",
"text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.",
"title": ""
},
{
"docid": "e0f48803b24826cbcf897c062dc512b7",
"text": "Performance of the support vector machine strongly depends on parameters settings. One of the most common algorithms for parameter tuning is grid search, combined with cross validation. This algorithm is often time consuming and inaccurate. In this paper we propose the use of stochastic metaheuristic algorithm, firefly algorithm, for effective support vector machine parameter tuning. The experimental results on 13 standard benchmark datasets show that our proposed method achieve better results compared to other state-of-the-art algorithms from literature.",
"title": ""
},
{
"docid": "3ba0b5d6be06f65cd2048e054eae4d7d",
"text": "Figure 1.1 shows the basic construction of a 3D graphics computer. That is also the general organization of this book, with each block more or less representing a chapter (there is no chapter on memory, but memory is discussed in multiple chapters). The book traces the earliest understanding of 3D and then the foundational mathematics to explain and construct 3D. From there we follow the history of the computer, beginning with mechanical computers, and ending up with tablets. Next, we present the amazing computer graphics (CG) algorithms and tricks, and it’s difficult to tell the story because there were a couple of periods where eruptions of new ideas and techniques seem to occur all at once. With the fundamentals of how to draw lines and create realistic images better understood, the applications that exploited those foundations. The applications of course can’t do the work by themselves and so the following chapter is on the 3D controllers that drive the display. The chapter that logically follows that is on the development of the displays, and a chapter follows that on stereovision.",
"title": ""
},
{
"docid": "74b8fd7767f1d08563103a13ad0247b7",
"text": "The segmentation of moving objects become challenging when the object motion is small, the shape of object changes, and there is global background motion in unconstrained videos. In this paper, we propose a fully automatic, efficient, fast and composite framework to segment the moving object on the basis of saliency, locality, color and motion cues. First, we propose a new saliency measure to predict the potential salient regions. In the second step, we use the RANSAC homography and optical flow to compensate the background motion and get reliable motion information, called motion cues. Furthermore, the saliency information and motion cues are combined to get the initial segmented object (seeded region). A refinement is performed to remove the unwanted noisy details and expand the seeded region to the whole object. Detailed experimentation is carried out on challenging video benchmarks to evaluate the performance of the proposed method. The results show that the proposed method is faster and performs better than state-of-the-art approaches.",
"title": ""
},
{
"docid": "7a10f559d9bbf1b6853ff6b89f5857f7",
"text": "Despite the much-ballyhooed increase in outsourcing, most companies are in do-it-yourself mode for the bulk of their processes, in large part because there's no way to compare outside organizations' capabilities with those of internal functions. Given the lack of comparability, it's almost surprising that anyone outsources today. But it's not surprising that cost is by far companies' primary criterion for evaluating outsourcers or that many companies are dissatisfied with their outsourcing relationships. A new world is coming, says the author, and it will lead to dramatic changes in the shape and structure of corporations. A broad set of process standards will soon make it easy to determine whether a business capability can be improved by outsourcing it. Such standards will also help businesses compare service providers and evaluate the costs versus the benefits of outsourcing. Eventually these costs and benefits will be so visible to buyers that outsourced processes will become a commodity, and prices will drop significantly. The low costs and low risk of outsourcing will accelerate the flow of jobs offshore, force companies to reassess their strategies, and change the basis of competition. The speed with which some businesses have already adopted process standards suggests that many previously unscrutinized areas are ripe for change. In the field of technology, for instance, the Carnegie Mellon Software Engineering Institute has developed a global standard for software development processes, called the Capability Maturity Model (CMM). For companies that don't have process standards in place, it makes sense for them to create standards by working with customers, competitors, software providers, businesses that processes may be outsourced to, and objective researchers and standard-setters. Setting standards is likely to lead to the improvement of both internal and outsourced processes.",
"title": ""
},
{
"docid": "329f6c340218e7ecd62c93a1e7ff727a",
"text": "To enhance video streaming experience for mobile users, we propose an approach towards Quality-of-Experience (QoE) aware on-the-fly transcoding. The proposed approach relies on the concept of Mobile Edge Computing (MEC) as a key enabler in enhancing service quality. Our scheme involves an autonomic creation of a transcoding service as a Virtual Network Function (VNF) and ensures dynamic rate switching of the streamed video to maintain the desirable quality. This edge-assistive transcoding and adaptive streaming results in reduced computational loads and reduced core network traffic. The proposed solution represents a complete miniature content delivery network infrastructure on the edge, ensuring reduced latency and better quality of experience",
"title": ""
},
{
"docid": "d852b0b89a748086a74d43adbf1ac867",
"text": "Community-based question-answering (CQA) services contribute to solving many difficult questions we have. For each question in such services, one best answer can be designated, among all answers, often by the asker. However, many questions on typical CQA sites are left without a best answer even if when good candidates are available. In this paper, we attempt to address the problem of predicting if an answer may be selected as the best answer, based on learning from labeled data. The key tasks include designing features measuring important aspects of an answer and identifying the most importance features. Experiments with a Stack Overflow dataset show that the contextual information among the answers should be the most important factor to consider.",
"title": ""
},
{
"docid": "0c9228dd4a65587e43fc6d2d1f0b03ce",
"text": "Secure multi-party computation (MPC) is a technique well suited for privacy-preserving data mining. Even with the recent progress in two-party computation techniques such as fully homomorphic encryption, general MPC remains relevant as it has shown promising performance metrics in real-world benchmarks. Sharemind is a secure multi-party computation framework designed with real-life efficiency in mind. It has been applied in several practical scenarios, and from these experiments, new requirements have been identified. Firstly, large datasets require more efficient protocols for standard operations such as multiplication and comparison. Secondly, the confidential processing of financial data requires the use of more complex primitives, including a secure division operation. This paper describes new protocols in the Sharemind model for secure multiplication, share conversion, equality, bit shift, bit extraction, and division. All the protocols are implemented and benchmarked, showing that the current approach provides remarkable speed improvements over the previous work. This is verified using real-world benchmarks for both operations and algorithms.",
"title": ""
},
{
"docid": "de3aee8ca694d59eb0ef340b3b1c8161",
"text": "In recent years, organisations have begun to realise the importance of knowing their customers better. Customer relationship management (CRM) is an approach to managing customer related knowledge of increasing strategic significance. The successful adoption of IT-enabled CRM redefines the traditional models of interaction between businesses and their customers, both nationally and globally. It is regarded as a source for competitive advantage because it enables organisations to explore and use knowledge of their customers and to foster profitable and long-lasting one-to-one relationships. This paper discusses the results of an exploratory survey conducted in the UK financial services sector; it discusses CRM practice and expectations, the motives for implementing it, and evaluates post-implementation experiences. It also investigates the CRM tools functionality in the strategic, process, communication, and business-to-customer (B2C) organisational context and reports the extent of their use. The results show that despite the anticipated potential, the benefits from such tools are rather small. # 2004 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "35c59e626d2d98f273d1978048f6436a",
"text": "OBJECTIVE\nTo evaluate the prevalence, type and severity of prescribing errors observed between grades of prescriber, ward area, admission or discharge and type of medication prescribed.\n\n\nDESIGN\nWard-based clinical pharmacists prospectively documented prescribing errors at the point of clinically checking admission or discharge prescriptions. Error categories and severities were assigned at the point of data collection, and verified independently by the study team.\n\n\nSETTING\nProspective study of nine diverse National Health Service hospitals in North West England, including teaching hospitals, district hospitals and specialist services for paediatrics, women and mental health.\n\n\nRESULTS\nOf 4238 prescriptions evaluated, one or more error was observed in 1857 (43.8%) prescriptions, with a total of 3011 errors observed. Of these, 1264 (41.9%) were minor, 1629 (54.1%) were significant, 109 (3.6%) were serious and 9 (0.30%) were potentially life threatening. The majority of errors considered to be potentially lethal (n=9) were dosing errors (n=8), mostly relating to overdose (n=7). The rate of error was not significantly different between newly qualified doctors compared with junior, middle grade or senior doctors. Multivariable analyses revealed the strongest predictor of error was the number of items on a prescription (risk of error increased 14% for each additional item). We observed a high rate of error from medication omission, particularly among patients admitted acutely into hospital. Electronic prescribing systems could potentially have prevented up to a quarter of (but not all) errors.\n\n\nCONCLUSIONS\nIn contrast to other studies, prescriber experience did not impact on overall error rate (although there were qualitative differences in error category). Given that multiple drug therapies are now the norm for many medical conditions, health systems should introduce and retain safeguards which detect and prevent error, in addition to continuing training and education, and migration to electronic prescribing systems.",
"title": ""
},
{
"docid": "ead93ea218664f371de64036e1788aa5",
"text": "OBJECTIVE\nTo assess the diagnostic efficacy of the first-trimester anomaly scan including first-trimester fetal echocardiography as a screening procedure in a 'medium-risk' population.\n\n\nMETHODS\nIn a prospective study, we evaluated 3094 consecutive fetuses with a crown-rump length (CRL) of 45-84 mm and gestational age between 11 + 0 and 13 + 6 weeks, using transabdominal and transvaginal ultrasonography. The majority of patients were referred without prior abnormal scan or increased nuchal translucency (NT) thickness, the median maternal age was, however, 35 (range, 15-46) years, and 53.8% of the mothers (1580/2936) were 35 years or older. This was therefore a self-selected population reflecting an increased percentage of older mothers opting for prenatal diagnosis. The follow-up rate was 92.7% (3117/3363).\n\n\nRESULTS\nThe prevalence of major abnormalities in 3094 fetuses was 2.8% (86/3094). The detection rate of major anomalies at the 11 + 0 to 13 + 6-week scan was 83.7% (72/86), 51.9% (14/27) for NT < 2.5 mm and 98.3% (58/59) for NT >or= 2.5 mm. The prevalence of major congenital heart defects (CHD) was 1.2% (38/3094). The detection rate of major CHD at the 11 to 13 + 6-week scan was 84.2% (32/38), 37.5% (3/8) for NT < 2.5 mm and 96.7% (29/30) for NT >or= 2.5 mm.\n\n\nCONCLUSION\nThe overall detection rate of fetal anomalies including fetal cardiac defects following a specialist scan at 11 + 0 to 13 + 6 weeks' gestation is about 84% and is increased when NT >or= 2.5 mm. This extends the possibilities of a first-trimester scan beyond risk assessment for fetal chromosomal defects. In experienced hands with adequate equipment, the majority of severe malformations as well as major CHD may be detected at the end of the first trimester, which offers parents the option of deciding early in pregnancy how to deal with fetuses affected by genetic or structural abnormalities without pressure of time.",
"title": ""
},
{
"docid": "fd472acf79142719a20862deab9c1302",
"text": "Gesture recognition has lured everyone's attention as a new generation of HCI and visual input mode. FPGA presents a better overall performance and flexibility than DSP for parallel processing and pipelined operations in order to process high resolution and high frame rate video processing. Vision-based gesture recognition technique is the best way to recognize the gesture. In gesture recognition, the image acquisition and image segmentation is there. In this paper, the image acquisition is shown and also the image segmentation techniques are discussed. In this, to capture the gesture the OV7670 CMOS camera chip sensor is used that is attached to FPGA DE-1 board. By using this gesture recognition, we can control any application in a non-tangible way.",
"title": ""
},
{
"docid": "40dc2dc28dca47137b973757cdf3bf34",
"text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.",
"title": ""
},
{
"docid": "dba5777004cf43d08a58ef3084c25bd3",
"text": "This paper investigates the problem of automatic humour recognition, and provides and in-depth analysis of two of the most frequently observ ed features of humorous text: human-centeredness and negative polarity. T hrough experiments performed on two collections of humorous texts, we show that th ese properties of verbal humour are consistent across different data s ets.",
"title": ""
},
{
"docid": "c4346bf13f8367fe3046ab280ac94183",
"text": "Human world is becoming more and more dependent on computers and information technology (IT). The autonomic capabilities in computers and IT have become the need of the day. These capabilities in software and systems increase performance, accuracy, availability and reliability with less or no human intervention (HI). Database has become the integral part of information system in most of the organizations. Databases are growing w.r.t size, functionality, heterogeneity and due to this their manageability needs more attention. Autonomic capabilities in Database Management Systems (DBMSs) are also essential for ease of management, cost of maintenance and hide the low level complexities from end users. With autonomic capabilities administrators can perform higher-level tasks. The DBMS that has the ability to manage itself according to the environment and resources without any human intervention is known as Autonomic DBMS (ADBMS). The paper explores and analyzes the autonomic components of Oracle by considering autonomic characteristics. This analysis illustrates how different components of Oracle manage itself autonomically. The research is focused to find and earmark those areas in Oracle where the human intervention is required. We have performed the same type of research over Microsoft SQL Server and DB2 [1, 2]. A comparison of autonomic components of Oracle with SQL Server is provided to show their autonomic status.",
"title": ""
}
] | scidocsrr |
7fc29ec2bc79dc5585700832729bd45e | Box 1 . Common neural network models Neuron | [
{
"docid": "897a6d208785b144b5d59e4f346134cd",
"text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.",
"title": ""
},
{
"docid": "a6d6ce2e09b866ec546e105aca1fbd1f",
"text": "Most Type 1 diabetes mellitus (T1DM) patients have hypoglycemia problem. Low blood glucose, also known as hypoglycemia, can be a dangerous and can result in unconsciousness, seizures and even death. In recent studies, heart rate (HR) and correct QT interval (QTc) of the electrocardiogram (ECG) signal are found as the most common physiological parameters to be effected from hypoglycemic reaction. In this paper, a state-of-the-art intelligent technology namely deep belief network (DBN) is developed as an intelligent diagnostics system to recognize the onset of hypoglycemia. The proposed DBN provides a superior classification performance with feature transformation on either processed or un-processed data. To illustrate the effectiveness of the proposed hypoglycemia detection system, 15 children with Type 1 diabetes were volunteered overnight. Comparing with several existing methodologies, the experimental results showed that the proposed DBN outperformed and achieved better classification performance.",
"title": ""
}
] | [
{
"docid": "06b6f659fe422410d65081735ad2d16a",
"text": "BACKGROUND\nImproving survival and extending the longevity of life for all populations requires timely, robust evidence on local mortality levels and trends. The Global Burden of Disease 2015 Study (GBD 2015) provides a comprehensive assessment of all-cause and cause-specific mortality for 249 causes in 195 countries and territories from 1980 to 2015. These results informed an in-depth investigation of observed and expected mortality patterns based on sociodemographic measures.\n\n\nMETHODS\nWe estimated all-cause mortality by age, sex, geography, and year using an improved analytical approach originally developed for GBD 2013 and GBD 2010. Improvements included refinements to the estimation of child and adult mortality and corresponding uncertainty, parameter selection for under-5 mortality synthesis by spatiotemporal Gaussian process regression, and sibling history data processing. We also expanded the database of vital registration, survey, and census data to 14 294 geography-year datapoints. For GBD 2015, eight causes, including Ebola virus disease, were added to the previous GBD cause list for mortality. We used six modelling approaches to assess cause-specific mortality, with the Cause of Death Ensemble Model (CODEm) generating estimates for most causes. We used a series of novel analyses to systematically quantify the drivers of trends in mortality across geographies. First, we assessed observed and expected levels and trends of cause-specific mortality as they relate to the Socio-demographic Index (SDI), a summary indicator derived from measures of income per capita, educational attainment, and fertility. Second, we examined factors affecting total mortality patterns through a series of counterfactual scenarios, testing the magnitude by which population growth, population age structures, and epidemiological changes contributed to shifts in mortality. Finally, we attributed changes in life expectancy to changes in cause of death. We documented each step of the GBD 2015 estimation processes, as well as data sources, in accordance with Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER).\n\n\nFINDINGS\nGlobally, life expectancy from birth increased from 61·7 years (95% uncertainty interval 61·4-61·9) in 1980 to 71·8 years (71·5-72·2) in 2015. Several countries in sub-Saharan Africa had very large gains in life expectancy from 2005 to 2015, rebounding from an era of exceedingly high loss of life due to HIV/AIDS. At the same time, many geographies saw life expectancy stagnate or decline, particularly for men and in countries with rising mortality from war or interpersonal violence. From 2005 to 2015, male life expectancy in Syria dropped by 11·3 years (3·7-17·4), to 62·6 years (56·5-70·2). Total deaths increased by 4·1% (2·6-5·6) from 2005 to 2015, rising to 55·8 million (54·9 million to 56·6 million) in 2015, but age-standardised death rates fell by 17·0% (15·8-18·1) during this time, underscoring changes in population growth and shifts in global age structures. The result was similar for non-communicable diseases (NCDs), with total deaths from these causes increasing by 14·1% (12·6-16·0) to 39·8 million (39·2 million to 40·5 million) in 2015, whereas age-standardised rates decreased by 13·1% (11·9-14·3). Globally, this mortality pattern emerged for several NCDs, including several types of cancer, ischaemic heart disease, cirrhosis, and Alzheimer's disease and other dementias. 
By contrast, both total deaths and age-standardised death rates due to communicable, maternal, neonatal, and nutritional conditions significantly declined from 2005 to 2015, gains largely attributable to decreases in mortality rates due to HIV/AIDS (42·1%, 39·1-44·6), malaria (43·1%, 34·7-51·8), neonatal preterm birth complications (29·8%, 24·8-34·9), and maternal disorders (29·1%, 19·3-37·1). Progress was slower for several causes, such as lower respiratory infections and nutritional deficiencies, whereas deaths increased for others, including dengue and drug use disorders. Age-standardised death rates due to injuries significantly declined from 2005 to 2015, yet interpersonal violence and war claimed increasingly more lives in some regions, particularly in the Middle East. In 2015, rotaviral enteritis (rotavirus) was the leading cause of under-5 deaths due to diarrhoea (146 000 deaths, 118 000-183 000) and pneumococcal pneumonia was the leading cause of under-5 deaths due to lower respiratory infections (393 000 deaths, 228 000-532 000), although pathogen-specific mortality varied by region. Globally, the effects of population growth, ageing, and changes in age-standardised death rates substantially differed by cause. Our analyses on the expected associations between cause-specific mortality and SDI show the regular shifts in cause of death composition and population age structure with rising SDI. Country patterns of premature mortality (measured as years of life lost [YLLs]) and how they differ from the level expected on the basis of SDI alone revealed distinct but highly heterogeneous patterns by region and country or territory. Ischaemic heart disease, stroke, and diabetes were among the leading causes of YLLs in most regions, but in many cases, intraregional results sharply diverged for ratios of observed and expected YLLs based on SDI. Communicable, maternal, neonatal, and nutritional diseases caused the most YLLs throughout sub-Saharan Africa, with observed YLLs far exceeding expected YLLs for countries in which malaria or HIV/AIDS remained the leading causes of early death.\n\n\nINTERPRETATION\nAt the global scale, age-specific mortality has steadily improved over the past 35 years; this pattern of general progress continued in the past decade. Progress has been faster in most countries than expected on the basis of development measured by the SDI. Against this background of progress, some countries have seen falls in life expectancy, and age-standardised death rates for some causes are increasing. Despite progress in reducing age-standardised death rates, population growth and ageing mean that the number of deaths from most non-communicable causes are increasing in most countries, putting increased demands on health systems.\n\n\nFUNDING\nBill & Melinda Gates Foundation.",
"title": ""
},
{
"docid": "cbda9744930c6d7282bca3f0083da8a3",
"text": "Open Information Extraction extracts relations from text without requiring a pre-specified domain or vocabulary. While existing techniques have used only shallow syntactic features, we investigate the use of semantic role labeling techniques for the task of Open IE. Semantic role labeling (SRL) and Open IE, although developed mostly in isolation, are quite related. We compare SRL-based open extractors, which perform computationally expensive, deep syntactic analysis, with TextRunner, an open extractor, which uses shallow syntactic analysis but is able to analyze many more sentences in a fixed amount of time and thus exploit corpus-level statistics. Our evaluation answers questions regarding these systems, including, can SRL extractors, which are trained on PropBank, cope with heterogeneous text found on the Web? Which extractor attains better precision, recall, f-measure, or running time? How does extractor performance vary for binary, n-ary and nested relations? How much do we gain by running multiple extractors? How do we select the optimal extractor given amount of data, available time, types of extractions desired?",
"title": ""
},
{
"docid": "963b6b2b337541fd741d31b2c8addc8d",
"text": "I. Unary terms • Body part detection candidates • Capture distribution of scores over all part classes II. Pairwise terms • Capture part relationships within/across people – proximity: same body part class (c = c) – kinematic relations: different part classes (c!= c) III. Integer Linear Program (ILP) • Substitute zdd cc = xdc xd c ydd ′ to linearize objective • NP-Hard problem solved via branch-and-cut (1% gap) • Linear constraints on 0/1 labelings: plausible poses – uniqueness",
"title": ""
},
{
"docid": "df04a11d82e8ccf8ea5af180f77bc5f3",
"text": "More and more cities are looking for service providers able to deliver 3D city models in a short time. Airborne laser scanning techniques make it possible to acquire a three-dimensional point cloud leading almost instantaneously to digital surface models (DSM), but these models are far from a topological 3D model needed by geographers or land surveyors. The aim of this paper is to present the pertinence and advantages of combining simultaneously the point cloud and the normalized DSM (nDSM) in the main steps of a building reconstruction approach. This approach has been implemented in order to exempt any additional data and to automate the process. The proposed workflow firstly extracts the off-terrain mask based on DSM. Then, it combines the point cloud and the DSM for extracting a building mask from the off-terrain. At last, based on the previously extracted building mask, the reconstruction of 3D flat roof models is carried out and analyzed.",
"title": ""
},
{
"docid": "0ca703e4379b89bd79b1c33d6cc0ce3e",
"text": "PET image reconstruction is challenging due to the ill-poseness of the inverse problem and limited number of detected photons. Recently, the deep neural networks have been widely and successfully used in computer vision tasks and attracted growing interests in medical imaging. In this paper, we trained a deep residual convolutional neural network to improve PET image quality by using the existing inter-patient information. An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers algorithm. Both simulation data and hybrid real data are used to evaluate the proposed method. Quantification results show that our proposed iterative neural network method can outperform the neural network denoising and conventional penalized maximum likelihood methods.",
"title": ""
},
{
"docid": "5b34624e72b1ed936ddca775cca329ca",
"text": "The advent of Cloud computing as a newmodel of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the problem of satisfying the QoS requirements of the user as well as minimizing the cost of workflow execution. We have previously designed and analyzed a two-phase scheduling algorithm for utility Grids, called Partial Critical Paths (PCP), which aims to minimize the cost of workflow execution while meeting a userdefined deadline. However, we believe Clouds are different from utility Grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. In this paper, we adapt the PCP algorithm for the Cloud environment and propose two workflow scheduling algorithms: a one-phase algorithmwhich is called IaaS Cloud Partial Critical Paths (IC-PCP), and a two-phase algorithm which is called IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2). Both algorithms have a polynomial time complexity which make them suitable options for scheduling large workflows. The simulation results show that both algorithms have a promising performance, with IC-PCP performing better than IC-PCPD2 in most cases. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d86d7f10c386969e0aef2c9a5eaf2845",
"text": "E-government services require certain service levels to be achieved as they replace traditional channels. E-government also increases the dependence of government agencies on information technology based services. High quality services entail high performance, availability and scalability among other service characteristics. Strict measures are required to help e-governments evaluate the service level and assess the quality of the service. In this paper we introduce the IT Infrastructure Library (ITIL) framework - a set of best practices to achieve quality service and overcome difficulties associated with the growth of IT systems [17][21]. We conducted an in depth assessment and gap analysis for both of the service support and service delivery processes [16], in a government institution, which allowed us to assess its maturity level within the context of ITIL. We then proposed and modeled these processes in accordance to ITIL best practices and based upon agency aspirations and environment constraints.",
"title": ""
},
{
"docid": "988b56fdbfd0fbb33bb715adb173c63c",
"text": "This paper presents a new sensing system for home-based rehabilitation based on optical linear encoder (OLE), in which the motion of an optical encoder on a code strip is converted to the limb joints' goniometric data. A body sensing module was designed, integrating the OLE and an accelerometer. A sensor network of three sensing modules was established via controller area network bus to capture human arm motion. Experiments were carried out to compare the performance of the OLE module with that of commercial motion capture systems such as electrogoniometers and fiber-optic sensors. The results show that the inexpensive and simple-design OLE's performance is comparable to that of expensive systems. Moreover, a statistical study was conducted to confirm the repeatability and reliability of the sensing system. The OLE-based system has strong potential as an inexpensive tool for motion capture and arm-function evaluation for short-term as well as long-term home-based monitoring.",
"title": ""
},
{
"docid": "e44b05d7a4a2979168b876c9cdd8f573",
"text": "The network architecture of the human brain has become a feature of increasing interest to the neuroscientific community, largely because of its potential to illuminate human cognition, its variation over development and aging, and its alteration in disease or injury. Traditional tools and approaches to study this architecture have largely focused on single scales-of topology, time, and space. Expanding beyond this narrow view, we focus this review on pertinent questions and novel methodological advances for the multi-scale brain. We separate our exposition into content related to multi-scale topological structure, multi-scale temporal structure, and multi-scale spatial structure. In each case, we recount empirical evidence for such structures, survey network-based methodological approaches to reveal these structures, and outline current frontiers and open questions. Although predominantly peppered with examples from human neuroimaging, we hope that this account will offer an accessible guide to any neuroscientist aiming to measure, characterize, and understand the full richness of the brain's multiscale network structure-irrespective of species, imaging modality, or spatial resolution.",
"title": ""
},
{
"docid": "d8c40ed2d2b2970412cc8404576d0c80",
"text": "In this paper an adaptive control technique combined with the so-called IDA-PBC (Interconnexion Damping Assignment, Passivity Based Control) controller is proposed for the stabilization of a class of underactuated mechanical systems, namely, the Inertia Wheel Inverted Pendulum (IWIP). It has two degrees of freedom with one actuator. The IDA-PBC stabilizes for all initial conditions (except a set of zeros measure) the upward position of the IWIP. The efficiency of this controller depends on the tuning of several gains. Motivated by this issue we propose to automatically adapt some of these gains in order to regain performance rapidly. The effectiveness of the proposed adaptive scheme is demonstrated through numerical simulations and experimental results.",
"title": ""
},
{
"docid": "db907780a2022761d2595a8ad5d03401",
"text": "This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.",
"title": ""
},
{
"docid": "908637523bc68c5095bdaa78d47076e2",
"text": "In this paper, we describe the Lithium Natural Language Processing (NLP) system a resource-constrained, highthroughput and language-agnostic system for information extraction from noisy user generated text on social media. Lithium NLP extracts a rich set of information including entities, topics, hashtags and sentiment from text. We discuss several real world applications of the system currently incorporated in Lithium products. We also compare our system with existing commercial and academic NLP systems in terms of performance, information extracted and languages supported. We show that Lithium NLP is at par with and in some cases, outperforms stateof-the-art commercial NLP systems.",
"title": ""
},
{
"docid": "6e4798c01a0a241d1f3746cd98ba9421",
"text": "BACKGROUND\nLarge blood-based prospective studies can provide reliable assessment of the complex interplay of lifestyle, environmental and genetic factors as determinants of chronic disease.\n\n\nMETHODS\nThe baseline survey of the China Kadoorie Biobank took place during 2004-08 in 10 geographically defined regions, with collection of questionnaire data, physical measurements and blood samples. Subsequently, a re-survey of 25,000 randomly selected participants was done (80% responded) using the same methods as in the baseline. All participants are being followed for cause-specific mortality and morbidity, and for any hospital admission through linkages with registries and health insurance (HI) databases.\n\n\nRESULTS\nOverall, 512,891 adults aged 30-79 years were recruited, including 41% men, 56% from rural areas and mean age was 52 years. The prevalence of ever-regular smoking was 74% in men and 3% in women. The mean blood pressure was 132/79 mmHg in men and 130/77 mmHg in women. The mean body mass index (BMI) was 23.4 kg/m(2) in men and 23.8 kg/m(2) in women, with only 4% being obese (>30 kg/m(2)), and 3.2% being diabetic. Blood collection was successful in 99.98% and the mean delay from sample collection to processing was 10.6 h. For each of the main baseline variables, there is good reproducibility but large heterogeneity by age, sex and study area. By 1 January 2011, over 10,000 deaths had been recorded, with 91% of surviving participants already linked to HI databases.\n\n\nCONCLUSION\nThis established large biobank will be a rich and powerful resource for investigating genetic and non-genetic causes of many common chronic diseases in the Chinese population.",
"title": ""
},
{
"docid": "9381ba0001262dd29d7ca74a98a56fc7",
"text": "Despite several advances in information retrieval systems and user interfaces, the specification of queries over text-based document collections remains a challenging problem. Query specification with keywords is a popular solution. However, given the widespread adoption of gesture-driven interfaces such as multitouch technologies in smartphones and tablets, the lack of a physical keyboard makes query specification with keywords inconvenient. We present BinGO, a novel gestural approach to querying text databases that allows users to refine their queries using a swipe gesture to either \"like\" or \"dislike\" candidate documents as well as express the reasons they like or dislike a document by swiping through automatically generated \"reason bins\". Such reasons refine a user's query with additional keywords. We present an online and efficient bin generation algorithm that presents reason bins at gesture articulation. We motivate and describe BinGo's unique interface design choices. Based on our analysis and user studies, we demonstrate that query specification by swiping through reason bins is easy and expressive.",
"title": ""
},
{
"docid": "e261d9989f23831be7b1269755a43bf6",
"text": "Taxi services and product delivery services are instrumental for our modern society. Thanks to the emergence of sharing economy, ride-sharing services such as Uber, Didi, Lyft and Google's Waze Rider are becoming more ubiquitous and grow into an integral part of our everyday lives. However, the efficiency of these services are severely limited by the sub-optimal and imbalanced matching between the supply and demand. We need a generalized framework and corresponding efficient algorithms to address the efficient matching, and hence optimize the performance of these markets. Existing studies for taxi and delivery services are only applicable in scenarios of the one-sided market. In contrast, this work investigates a highly generalized model for the taxi and delivery services in the market economy (abbreviated as\"taxi and delivery market\") that can be widely used in two-sided markets. Further, we present efficient online and offline algorithms for different applications. We verify our algorithm with theoretical analysis and trace-driven simulations under realistic settings.",
"title": ""
},
{
"docid": "b0155714f0b0c8c24ee7d30b6fc62ace",
"text": "It has become almost routine practice to incorporate balance exercises into training programs for athletes from different sports. However, the type of training that is most efficient remains unclear, as well as the frequency, intensity and duration of the exercise that would be most beneficial have not yet been determined. The following review is based on papers that were found through computerized searches of PubMed and SportDiscus from 2000 to 2016. Articles related to balance training, testing, and injury prevention in young healthy athletes were considered. Based on a Boolean search strategy the independent researchers performed a literature review. A total of 2395 articles were evaluated, yet only 50 studies met the inclusion criteria. In most of the reviewed articles, balance training has proven to be an effective tool for the improvement of postural control. It is difficult to establish one model of training that would be appropriate for each sport discipline, including its characteristics and demands. The main aim of this review was to identify a training protocol based on most commonly used interventions that led to improvements in balance. Our choice was specifically established on the assessment of the effects of balance training on postural control and injury prevention as well as balance training methods. The analyses including papers in which training protocols demonstrated positive effects on balance performance suggest that an efficient training protocol should last for 8 weeks, with a frequency of two training sessions per week, and a single training session of 45 min. This standard was established based on 36 reviewed studies.",
"title": ""
},
{
"docid": "52b380effa3078c54ba1750305e07c06",
"text": "Advances in causal discovery from data are becoming a widespread topic in machine learning these recent years. In this paper, studies on conditional independence-based causality are briefly reviewed along a line of observable two-variable, three-variable, star decomposable, and tree decomposable, as well as their relationship to factor analysis. Then, developments along this line are further addressed from three perspectives with a number of issues, especially on learning approximate star decomposable, and tree decomposable, as well as their generalisations to block star-causality analysis on factor analysis and block tree decomposable analysis on linear causal model.",
"title": ""
},
{
"docid": "5f89aac70e93b9fcf4c37d119770f747",
"text": "Partial differential equations (PDEs) play a prominent role in many disciplines of science and engineering. PDEs are commonly derived based on empirical observations. However, with the rapid development of sensors, computational power, and data storage in the past decade, huge quantities of data can be easily collected and efficiently stored. Such vast quantity of data offers new opportunities for data-driven discovery of physical laws. Inspired by the latest development of neural network designs in deep learning, we propose a new feed-forward deep network, called PDENet, to fulfill two objectives at the same time: to accurately predict dynamics of complex systems and to uncover the underlying hidden PDE models. Comparing with existing approaches, our approach has the most flexibility by learning both differential operators and the nonlinear response function of the underlying PDE model. A special feature of the proposed PDE-Net is that all filters are properly constrained, which enables us to easily identify the governing PDE models while still maintaining the expressive and predictive power of the network. These constrains are carefully designed by fully exploiting the relation between the orders of differential operators and the orders of sum rules of filters (an important concept originated from wavelet theory). Numerical experiments show that the PDE-Net has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment. Equal contribution School of Mathematical Sciences, Peking University, Beijing, China Beijing Computational Science Research Center, Beijing, China Beijing International Center for Mathematical Research, Peking University, Beijing, China Center for Data Science, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research. Correspondence to: Bin Dong <dongbin@math.pku.edu.cn>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "38ce75d302995364c5706cd74da35a03",
"text": "In higher educational institutes, many students have to struggle hard to complete different courses since there is no dedicated support offered to students who need special attention in the registered courses. Machine learning techniques can be utilized for students’ grades prediction in different courses. Such techniques would help students to improve their performance based on predicted grades and would enable instructors to identify such individuals who might need assistance in the courses. In this paper, we use Collaborative Filtering (CF), Matrix Factorization (MF), and Restricted Boltzmann Machines (RBM) techniques to systematically analyze a real-world data collected from Information Technology University (ITU), Lahore, Pakistan. We evaluate the academic performance of ITU students who got admission in the bachelor’s degree program in ITU’s Electrical Engineering department. The RBM technique is found to be better than the other techniques used in predicting the students’ performance in the particular course.",
"title": ""
},
{
"docid": "2ae69330b32aa485876e26ecc78ca66d",
"text": "One of the promising usages of Physically Unclonable Functions (PUFs) is to generate cryptographic keys from PUFs for secure storage of key material. This usage has attractive properties such as physical unclonability and enhanced resistance against hardware attacks. In order to extract a reliable cryptographic key from a noisy PUF response a fuzzy extractor is used to convert non-uniform random PUF responses into nearly uniform randomness. Bösch et al. in 2008 proposed a fuzzy extractor suitable for efficient hardware implementation using two-stage concatenated codes, where the inner stage is a conventional error correcting code and the outer stage is a repetition code. In this paper we show that the combination of PUFs with repetition code approaches is not without risk and must be approached carefully. For example, PUFs with min-entropy lower than 66% may yield zero leftover entropy in the generated key for some repetition code configurations. In addition, we find that many of the fuzzy extractor designs in the literature are too optimistic with respect to entropy estimation. For high security applications, we recommend a conservative estimation of entropy loss based on the theoretical work of fuzzy extractors and present parameters for generating 128-bit keys from memory based PUFs.",
"title": ""
}
] | scidocsrr |
70f0364b6dd31fd832d9f8e323819f73 | RFID tags: Positioning principles and localization techniques | [
{
"docid": "259c17740acd554463731d3e1e2912eb",
"text": "In recent years, radio frequency identification technology has moved from obscurity into mainstream applications that help speed the handling of manufactured goods and materials. RFID enables identification from a distance, and unlike earlier bar-code technology, it does so without requiring a line of sight. In this paper, the author introduces the principles of RFID, discusses its primary technologies and applications, and reviews the challenges organizations will face in deploying this technology.",
"title": ""
}
] | [
{
"docid": "ccabbaf3caded63d94c77562a47a978f",
"text": "Modern deep artificial neural networks have achieved impressive results through models with very large capacity—compared to the number of training examples— that control overfitting with the help of different forms of regularization. Regularization can be implicit, as is the case of stochastic gradient descent and parameter sharing in convolutional layers, or explicit. Most common explicit regularization techniques, such as weight decay and dropout, reduce the effective capacity of the model and typically require the use of deeper and wider architectures to compensate for the reduced capacity. Although these techniques have been proven successful in terms of improved generalization, they seem to waste capacity. In contrast, data augmentation techniques do not reduce the effective capacity and improve generalization by increasing the number of training examples. In this paper we systematically analyze the effect of data augmentation on some popular architectures and conclude that data augmentation alone—without any other explicit regularization techniques—can achieve the same performance or higher as regularized models, especially when training with fewer examples, and exhibits much higher adaptability to changes in the architecture.",
"title": ""
},
{
"docid": "89469027347d0118f2ba576d7b372ae7",
"text": "We are given a large population database that contains information about population instances. The population is known to comprise of m groups, but the population instances are not labeled with the group identi cation. Also given is a population sample (much smaller than the population but representative of it) in which the group labels of the instances are known. We present an interval classi er (IC) which generates a classi cation function for each group that can be used to e ciently retrieve all instances of the specied group from the population database. To allow IC to be embedded in interactive loops to answer adhoc queries about attributes with missing values, IC has been designed to be e cient in the generation of classi cation functions. Preliminary experimental results indicate that IC not only has retrieval and classi er generation e ciency advantages, but also compares favorably in the classi cation accuracy with current tree classi ers, such as ID3, which were primarily designed for minimizing classi cation errors. We also describe some new applications that arise from encapsulating the classi cation capability in database systems and discuss extensions to IC for it to be used in these new application domains. Current address: Computer Science Department, Rutgers University, NJ 08903 Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 18th VLDB Conference Vancouver, British Columbia, Canada 1992",
"title": ""
},
{
"docid": "c256283819014d79dd496a3183116b68",
"text": "For the 5th generation of terrestrial mobile communications, Multi-Carrier (MC) transmission based on non-orthogonal waveforms is a promising technology component compared to orthogonal frequency division multiplex (OFDM) in order to achieve higher throughput and enable flexible spectrum management. Coverage extension and service continuity can be provided considering satellites as additional components in future networks by allowing vertical handover to terrestrial radio interfaces. In this paper, the properties of Filter Bank Multicarrier (FBMC) as potential MC transmission scheme is discussed taking into account the requirements for the satellite-specific PHY-Layer like non-linear distortions due to High Power Amplifiers (HPAs). The performance for specific FBMC configurations is analyzed in terms of peak-to-average power ratio (PAPR), computational complexity, non-linear distortions as well as carrier frequency offsets sensitivity (CFOs). Even though FBMC and OFDM have similar PAPR and suffer comparable spectral regrowth at the output of the non linear amplifier, simulations on link level show that FBMC still outperforms OFDM in terms of CFO sensitivity and symbol error rate in the presence of non-linear distortions.",
"title": ""
},
{
"docid": "0d6d8578b41d736a6df373c08e3e1f95",
"text": "We provide a Matlab package p1afem for an adaptive P1-finite element method (AFEM). This includes functions for the assembly of the data, different error estimators, and an indicator-based adaptive mesh-refining algorithm. Throughout, the focus is on an efficient realization by use of Matlab built-in functions and vectorization. Numerical experiments underline the efficiency of the code which is observed to be of almost linear complexity with respect to the runtime. Although the scope of this paper is on AFEM, the general ideas can be understood as a guideline for writing efficient Matlab code.",
"title": ""
},
{
"docid": "9e4adad2e248895d80f28cf6134f68c1",
"text": "Maltodextrin (MX) is an ingredient in high demand in the food industry, mainly for its useful physical properties which depend on the dextrose equivalent (DE). The DE has however been shown to be an inaccurate parameter for predicting the performance of the MXs in technological applications, hence commercial MXs were characterized by mass spectrometry (MS) to determine their molecular weight distribution (MWD) and degree of polymerization (DP). Samples were subjected to different water activities (aw). Water adsorption was similar at low aw, but radically increased with the DP at higher aw. The decomposition temperature (Td) showed some variations attributed to the thermal hydrolysis induced by the large amount of adsorbed water and the supplied heat. The glass transition temperature (Tg) linearly decreased with both, aw and DP. The microstructural analysis by X-ray diffraction showed that MXs did not crystallize with the adsorption of water, preserving their amorphous structure. The optical micrographs showed radical changes in the overall appearance of the MXs, indicating a transition from a glassy to a rubbery state. Based on these characterizations, different technological applications for the MXs were suggested.",
"title": ""
},
{
"docid": "fc4e32d6bafbc3cf18802f0af12e3092",
"text": "Self-report instruments commonly used to assess depression in adolescents have limited or unknown reliability and validity in this age group. We describe a new self-report scale, the Kutcher Adolescent Depression Scale (KADS), designed specifically to diagnose and assess the severity of adolescent depression. This report compares the diagnostic validity of the full 16-item instrument, brief versions of it, and the Beck Depression Inventory (BDI) against the criteria for major depressive episode (MDE) from the Mini International Neuropsychiatric Interview (MINI). Some 309 of 1,712 grade 7 to grade 12 students who completed the BDI had scores that exceeded 15. All were invited for further assessment, of whom 161 agreed to assessment by the KADS, the BDI again, and a MINI diagnostic interview for MDE. Receiver operating characteristic (ROC) curve analysis was used to determine which KADS items best identified subjects experiencing an MDE. Further ROC curve analyses established that the overall diagnostic ability of a six-item subscale of the KADS was at least as good as that of the BDI and was better than that of the full-length KADS. Used with a cutoff score of 6, the six-item KADS achieved sensitivity and specificity rates of 92% and 71%, respectively-a combination not achieved by other self-report instruments. The six-item KADS may prove to be an efficient and effective means of ruling out MDE in adolescents.",
"title": ""
},
{
"docid": "5e7b935a73180c9ccad3bc0e82311503",
"text": "What happens if one pushes a cup sitting on a table toward the edge of the table? How about pushing a desk against a wall? In this paper, we study the problem of understanding the movements of objects as a result of applying external forces to them. For a given force vector applied to a specific location in an image, our goal is to predict long-term sequential movements caused by that force. Doing so entails reasoning about scene geometry, objects, their attributes, and the physical rules that govern the movements of objects. We design a deep neural network model that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks. Training our model requires a large-scale dataset of object movements caused by external forces. To build a dataset of forces in scenes, we reconstructed all images in SUN RGB-D dataset in a physics simulator to estimate the physical movements of objects caused by external forces applied to them. Our Forces in Scenes (ForScene) dataset contains 10,335 images in which a variety of external forces are applied to different types of objects resulting in more than 65,000 object movements represented in 3D. Our experimental evaluations show that the challenging task of predicting longterm movements of objects as their reaction to external forces is possible from a single image.",
"title": ""
},
{
"docid": "87748bcc07ab498218233645bdd4dd0c",
"text": "This paper proposes a method of recognizing and classifying the basic activities such as forward and backward motions by applying a deep learning framework on passive radio frequency (RF) signals. The echoes from the moving body possess unique pattern which can be used to recognize and classify the activity. A passive RF sensing test- bed is set up with two channels where the first one is the reference channel providing the un- altered echoes of the transmitter signals and the other one is the surveillance channel providing the echoes of the transmitter signals reflecting from the moving body in the area of interest. The echoes of the transmitter signals are eliminated from the surveillance signals by performing adaptive filtering. The resultant time series signal is classified into different motions as predicted by proposed novel method of convolutional neural network (CNN). Extensive amount of training data has been collected to train the model, which serves as a reference benchmark for the later studies in this field.",
"title": ""
},
{
"docid": "84f3e4354af23ece035ee604507eec71",
"text": "Speech is not recognized with an accuracy of 100%. Even humans are not able to do that. There will always be some uncertainty in the recognized input, requiring strategies to cope. This is different from the experience with graphical user interfaces, where keyboard and mouse input are recognized without any doubts. Speech recognition and other errors occur frequently and reduce both the usefulness of applications and user satisfaction. This turns error handling into a crucial aspect of speech applications. Successful error handling methods can make even applications with poor recognition accuracy more successful. In [Sagawa et al. 2004] the authors show that the task completion rate increased from 86.4% to 93.4% and the average number of turns reduced by three after a better error handling method had been installed. On the other hand, poorly constructed error handling may bring unwanted complexity to the system and cause new errors and annoyances.",
"title": ""
},
{
"docid": "281e8785214bb209a142d420dfdc5f26",
"text": "This study examined achievement when podcasts were used in place of lecture in the core technology course required for all students seeking teacher licensure at a large research-intensive university in the Southeastern United States. Further, it examined the listening preferences of the podcast group and the barriers to podcast use. The results revealed that there was no significant difference in the achievement of preservice teachers who experienced podcast instruction versus those who received lecture instruction. Further, there was no significant difference in their study habits. Participants preferred to use a computer and Blackboard for downloading the podcasts, which they primarily listened to at home. They tended to like the podcasts as well as the length of the podcasts and felt that they were reasonably effective for learning. They agreed that the podcasts were easy to use but disagreed that they should be used to replace lecture. Barriers to podcast use include unfamiliarity with podcasts, technical problems in accessing and downloading podcasts, and not seeing the relevance of podcasts to their learning. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cf90703045e958c48282d758f84f2568",
"text": "One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within. Millions of micro-providers could come into existence, forming a highly fragmented market place with new business opportunities to offer commercial services. In the related field of Internet and Telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and corresponding pricing schemes need to be well understood to enable a commercial success of sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.",
"title": ""
},
{
"docid": "6922a913c6ede96d5062f055b55377e7",
"text": "This paper presents the issue of a nonharmonic multitone generation with the use of singing bowls and the digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. Some known methods of the digital generation of the tone signal with the additional modulation are evaluated. Two projects of the very precise multitone generators are presented. In described generators, the digital signal processors synthesize the signal, while the additional microcontrollers realize the operator's interface. As a final result, the sound of the original singing bowls is confronted with the sound synthesized by one of the generators.",
"title": ""
},
{
"docid": "9cc5fddebc5c45c4c7f5535136275076",
"text": "This paper details the winning method in the IEEE GOLD category of the PHM psila08 Data Challenge. The task was to estimate the remaining useable life left of an unspecified complex system using a purely data driven approach. The method involves the construction of Multi-Layer Perceptron and Radial Basis Function networks for regression. A suitable selection of these networks has been successfully combined in an ensemble using a Kalman filter. The Kalman filter provides a mechanism for fusing multiple neural network model predictions over time. The essential initial stages of pre-processing and data exploration are also discussed.",
"title": ""
},
{
"docid": "785cb08c500aea1ead360138430ba018",
"text": "A recent “third wave” of neural network (NN) approaches now delivers state-of-the-art performance in many machine learning tasks, spanning speech recognition, computer vision, and natural language processing. Because these modern NNs often comprise multiple interconnected layers, work in this area is often referred to as deep learning. Recent years have witnessed an explosive growth of research into NN-based approaches to information retrieval (IR). A significant body of work has now been created. In this paper, we survey the current landscape of Neural IR research, paying special attention to the use of learned distributed representations of textual units. We highlight the successes of neural IR thus far, catalog obstacles to its wider adoption, and suggest potentially promising directions for future research.",
"title": ""
},
{
"docid": "2f208da3cc0dab71e82bf2f83f0d5639",
"text": "Automatic music type classification is very helpful for the management of digital music database. In this paper, Octavebased Spectral Contrast feature is proposed to represent the spectral characteristics of a music clip. It represented the relative spectral distribution instead of average spectral envelope. Experiments showed that Octave-based Spectral Contrast feature performed well in music type classification. Another comparison experiment demonstrated that Octave-based Spectral Contrast feature has a better discrimination among different music types than Mel-Frequency Cepstral Coefficients (MFCC), which is often used in previous music type classification systems.",
"title": ""
},
{
"docid": "d8be338cbe411c79905f108fbbe55814",
"text": "Head-up displays (HUD) permit augmented reality (AR) information in cars. Simulation is a convenient way to design and evaluate the benefit of such innovation for the driver. For this purpose, we have developed a virtual HUD that we compare to real AR HUDs from depth perception features. User testing was conducted with 24 participants in a stereoscopic driving simulator. It showed the ability of the virtual HUD to reproduce the perception of the distance between real objects and their augmentation. Three AR overlay designs to highlight the car ahead were compared: the trapezoid shape was perceived as more congruent that the U shape overlay.",
"title": ""
},
{
"docid": "2f1e059a0c178b3703c31ad31761dadc",
"text": "This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a data set that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes advantages and disadvantages of proposed approaches and unsolved problems in the area.",
"title": ""
},
{
"docid": "de8633682653e9f979ec7a9177e461b4",
"text": "The increasingly widespread use of social network sites to expand and deepen one’s social connections is a relatively new but potentially important phenomenon that has implications for teaching and learning and teacher education in the 21st century. This paper surveys the educational research literature to examine: How such technologies are perceived and used by K-12 learners and teachers with what impacts on pedagogy or students' learning. Selected studies were summarized and categorized according to the four types introduced by Roblyer (2005) as studies most needed to move the educational technology field forward. These include studies that establish the technology’s effectiveness at improving student learning; investigate implementation strategies; monitor social impact; and report on common uses to shape the direction of the field. We found the most prevalent type of study conducted related to our focal topic was research on common uses. The least common type of study conducted was research that established the technology’s effectiveness at improving student learning. Implications for the design of future research and teacher education initiatives are discussed.",
"title": ""
},
{
"docid": "86ad395a553495de5f297a2b5fde3f0e",
"text": "⇒ NOT written, but spoken language. [Intuitions come from written.] ⇒ NOT meaning as thing, but use of linguistic forms for communicative functions o Direct att. in shared conceptual space like gestures (but w/conventions) ⇒ NOT grammatical rules, but patterns of use => schemas o Constructions themselves as complex symbols \"She sneezed him the ball\" o NOT 'a grammar' but a structured inventory of constructions: continuum of regularity => idiomaticity grammaticality = normativity • Many complexities = \"unification\" of constructions w/ incompatibilities o NOT innate UG, but \"teeming modularity\" (1) symbols, pred-arg structure,",
"title": ""
},
{
"docid": "18f530c400498658d73aba21f0ce984e",
"text": "Anomaly and event detection has been studied widely for having many applications in fraud detection, network intrusion detection, detection of epidemic outbreaks, and so on. In this paper we propose an algorithm that operates on a time-varying network of agents with edges representing interactions between them and (1) spots \"anomalous\" points in time at which many agents \"change\" their behavior in a way it deviates from the norm; and (2) attributes the detected anomaly to those agents that contribute to the \"change\" the most. Experiments on a large mobile phone network (of 2 million anonymous customers with 50 million interactions over a period of 6 months) shows that the \"change\"-points detected by our algorithm coincide with the social events and the festivals in our data.",
"title": ""
}
] | scidocsrr |
8faaeab9c3ab915adb0ce9a47c6b4b1c | FastRunner: A fast, efficient and robust bipedal robot. Concept and planar simulation | [
{
"docid": "2997be0d8b1f7a183e006eba78135b13",
"text": "The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed.",
"title": ""
}
] | [
{
"docid": "3d310295592775bbe785692d23649c56",
"text": "BACKGROUND\nEvidence indicates that sexual assertiveness is one of the important factors affecting sexual satisfaction. According to some studies, traditional gender norms conflict with women's capability in expressing sexual desires. This study examined the relationship between gender roles and sexual assertiveness in married women in Mashhad, Iran.\n\n\nMETHODS\nThis cross-sectional study was conducted on 120 women who referred to Mashhad health centers through convenient sampling in 2014-15. Data were collected using Bem Sex Role Inventory (BSRI) and Hulbert index of sexual assertiveness. Data were analyzed using SPSS 16 by Pearson and Spearman's correlation tests and linear Regression Analysis.\n\n\nRESULTS\nThe mean scores of sexual assertiveness was 54.93±13.20. According to the findings, there was non-significant correlation between Femininity and masculinity score with sexual assertiveness (P=0.069 and P=0.080 respectively). Linear regression analysis indicated that among the predictor variables, only Sexual function satisfaction was identified as the sexual assertiveness summary predictor variables (P=0.001).\n\n\nCONCLUSION\nBased on the results, sexual assertiveness in married women does not comply with gender role, but it is related to Sexual function satisfaction. So, counseling psychologists need to consider this variable when designing intervention programs for modifying sexual assertiveness and find other variables that affect sexual assertiveness.",
"title": ""
},
{
"docid": "c804a0b91f79bc80b5156e182a628650",
"text": "Software as a Service (SaaS) is an online software delivery model which permits a third party provider offering software services to be used on-demand by tenants over the internet, instead of installing and maintaining them in their premises. Nowadays, more and more companies are offering their web-base business application by adopting this model. Multi-tenancy is the primary characteristic of SaaS, it allows SaaS vendors to run a single instance application which supports multiple tenants on the same hardware and software infrastructure. This application should be highly customizable to meet tenants' expectations and business requirements. In this paper, we propose a novel customizable database design for multi-tenant applications. Our design introduces an Elastic Extension Tables (EET) which consists of Common Tenant Tables (CTT) and Virtual Extension Tables (VET). This design enables tenants to create their own elastic database schema during multi-tenant application run-time execution to satisfy their business needs.",
"title": ""
},
{
"docid": "93e6194dc3d8922edb672ac12333ea82",
"text": "Sensors including RFID tags have been widely deployed for measuring environmental parameters such as temperature, humidity, oxygen concentration, monitoring the location and velocity of moving objects, tracking tagged objects, and many others. To support effective, efficient, and near real-time phenomena probing and objects monitoring, streaming sensor data have to be gracefully managed in an event processing manner. Different from the traditional events, sensor events come with temporal or spatio-temporal constraints and can be non-spontaneous. Meanwhile, like general event streams, sensor event streams can be generated with very high volumes and rates. Primitive sensor events need to be filtered, aggregated and correlated to generate more semantically rich complex events to facilitate the requirements of up-streaming applications. Motivated by such challenges, many new methods have been proposed in the past to support event processing in sensor event streams. In this chapter, we survey state-of-the-art research on event processing in sensor networks, and provide a broad overview of major topics in Springer Science+Business Media New York 2013 © Managing and Mining Sensor Data, DOI 10.1007/978-1-4614-6309-2_4, C.C. Aggarwal (ed.), 77 78 MANAGING AND MINING SENSOR DATA complex RFID event processing, including event specification languages, event detection models, event processing methods and their optimizations. Additionally, we have presented an open discussion on advanced issues such as processing uncertain and out-of-order sensor events.",
"title": ""
},
{
"docid": "4f43c8ba81a8b828f225923690e9f7dd",
"text": "Melody extraction algorithms aim to produce a sequence of frequency values corresponding to the pitch of the dominant melody from a musical recording. Over the past decade, melody extraction has emerged as an active research topic, comprising a large variety of proposed algorithms spanning a wide range of techniques. This article provides an overview of these techniques, the applications for which melody extraction is useful, and the challenges that remain. We start with a discussion of ?melody? from both musical and signal processing perspectives and provide a case study that interprets the output of a melody extraction algorithm for specific excerpts. We then provide a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign. We discuss issues of algorithm design, evaluation, and applications that build upon melody extraction. Finally, we discuss some of the remaining challenges in melody extraction research in terms of algorithmic performance, development, and evaluation methodology.",
"title": ""
},
{
"docid": "0e4722012aeed8dc356aa8c49da8c74f",
"text": "The Android software stack for mobile devices defines and enforces its own security model for apps through its application-layer permissions model. However, at its foundation, Android relies upon the Linux kernel to protect the system from malicious or flawed apps and to isolate apps from one another. At present, Android leverages Linux discretionary access control (DAC) to enforce these guarantees, despite the known shortcomings of DAC. In this paper, we motivate and describe our work to bring flexible mandatory access control (MAC) to Android by enabling the effective use of Security Enhanced Linux (SELinux) for kernel-level MAC and by developing a set of middleware MAC extensions to the Android permissions model. We then demonstrate the benefits of our security enhancements for Android through a detailed analysis of how they mitigate a number of previously published exploits and vulnerabilities for Android. Finally, we evaluate the overheads imposed by our security enhancements.",
"title": ""
},
{
"docid": "69d9bfd0ba72724e560f499a4807d7e7",
"text": "Is it possible to recover an image from its noisy version using convolutional neural networks? This is an interesting problem as convolutional layers are generally used as feature detectors for tasks like classification, segmentation and object detection. We present a new CNN architecture for blind image denoising which synergically combines three architecture components, a multi-scale feature extraction layer which helps in reducing the effect of noise on feature maps, an ℓp regularizer which helps in selecting only the appropriate feature maps for the task of reconstruction, and finally a three step training approach which leverages adversarial training to give the final performance boost to the model. The proposed model shows competitive denoising performance when compared to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "ef6678881f503c1cec330ddde3e30929",
"text": "Complex queries over high speed data streams often need to rely on approximations to keep up with their input. The research community has developed a rich literature on approximate streaming algorithms for this application. Many of these algorithms produce samples of the input stream, providing better properties than conventional random sampling. In this paper, we abstract the stream sampling process and design a new stream sample operator. We show how it can be used to implement a wide variety of algorithms that perform sampling and sampling-based aggregations. Also, we show how to implement the operator in Gigascope - a high speed stream database specialized for IP network monitoring applications. As an example study, we apply the operator within such an enhanced Gigascope to perform subset-sum sampling which is of great interest for IP network management. We evaluate this implemention on a live, high speed internet traffic data stream and find that (a) the operator is a flexible, versatile addition to Gigascope suitable for tuning and algorithm engineering, and (b) the operator imposes only a small evaluation overhead. This is the first operational implementation we know of, for a wide variety of stream sampling algorithms at line speed within a data stream management system.",
"title": ""
},
{
"docid": "48ba3cad9e20162b6dcbb28ead47d997",
"text": "This paper compares the accuracy of several variations of the B LEU algorithm when applied to automatically evaluating student essays. The different configurations include closed-class word removal, stemming, two baseline wordsense disambiguation procedures, and translating the texts into a simple semantic representation. We also prove empirically that the accuracy is kept when the student answers are translated automatically. Although none of the representations clearly outperform the others, some conclusions are drawn from the results.",
"title": ""
},
{
"docid": "cf419597981ba159ac3c1e85af683871",
"text": "Energy is a vital input for social and economic development. As a result of the generalization of agricultural, industrial and domestic activities the demand for energy has increased remarkably, especially in emergent countries. This has meant rapid grower in the level of greenhouse gas emissions and the increase in fuel prices, which are the main driving forces behind efforts to utilize renewable energy sources more effectively, i.e. energy which comes from natural resources and is also naturally replenished. Despite the obvious advantages of renewable energy, it presents important drawbacks, such as the discontinuity of ulti-criteria decision analysis",
"title": ""
},
{
"docid": "a9e26514ffc78c1018e00c63296b9584",
"text": "When labeled examples are limited and difficult to obtain, transfer learning employs knowledge from a source domain to improve learning accuracy in the target domain. However, the assumption made by existing approaches, that the marginal and conditional probabilities are directly related between source and target domains, has limited applicability in either the original space or its linear transformations. To solve this problem, we propose an adaptive kernel approach that maps the marginal distribution of target-domain and source-domain data into a common kernel space, and utilize a sample selection strategy to draw conditional probabilities between the two domains closer. We formally show that under the kernel-mapping space, the difference in distributions between the two domains is bounded; and the prediction error of the proposed approach can also be bounded. Experimental results demonstrate that the proposed method outperforms both traditional inductive classifiers and the state-of-the-art boosting-based transfer algorithms on most domains, including text categorization and web page ratings. In particular, it can achieve around 10% higher accuracy than other approaches for the text categorization problem. The source code and datasets are available from the authors.",
"title": ""
},
{
"docid": "e6dd43c6e5143c519b40ab423b403193",
"text": "Tables and forms are a very common way to organize information in structured documents. Their recognition is fundamental for the recognition of the documents. Indeed, the physical organization of a table or a form gives a lot of information concerning the logical meaning of the content. This chapter presents the different tasks that are related to the recognition of tables and forms and the associated well-known methods and remaining B. Coüasnon ( ) IRISA/INSA de Rennes, Rennes Cedex, France e-mail: couasnon@irisa.fr A. Lemaitre IRISA/Université Rennes 2, Rennes Cedex, France e-mail:couasnon@irisa.fr D. Doermann, K. Tombre (eds.), Handbook of Document Image Processing and Recognition, DOI 10.1007/978-0-85729-859-1 20, © Springer-Verlag London 2014 647 648 B. Coüasnon and A. Lemaitre challenges. Three main tasks are pointed out: the detection of tables in heterogeneous documents; the classification of tables and forms, according to predefined models; and the recognition of table and form contents. The complexity of these three tasks is related to the kind of studied document: image-based document or digital-born documents. At last, this chapter will introduce some existing systems for table and form analysis.",
"title": ""
},
{
"docid": "51b8fe57500d1d74834d1f9faa315790",
"text": "Simulations of smoke are pervasive in the production of visual effects for commercials, movies and games: from cigarette smoke and subtle dust to large-scale clouds of soot and vapor emanating from fires and explosions. In this talk we present a new Eulerian method that targets the simulation of such phenomena on a structured spatially adaptive voxel grid --- thereby achieving an improvement in memory usage and computational performance over regular dense and sparse grids at uniform resolution. Contrary to e.g. Setaluri et al. [2014], we use velocities collocated at voxel corners which allows sharper interpolation for spatially adaptive simulations, is faster for sampling, and promotes ease-of-use in an open procedural environment where technical artists often construct small computational graphs that apply forces, dissipation etc. to the velocities. The collocated method requires special treatment when projecting out the divergent velocity modes to prevent non-physical high frequency oscillations (not addressed by Ferstl et al. [2014]). To this end we explored discretization and filtering methods from computational physics, combining them with a matrix-free adaptive multigrid scheme based on MLAT and FAS [Trottenberg and Schuller 2001]. Finally we contribute a new volumetric quadrature approach to temporally smooth emission which outperforms e.g. Gaussian quadrature at large time steps. We have implemented our method in the cross-platform Autodesk Bifrost procedural environment which facilitates customization by the individual technical artist, and our implementation is in production use at several major studios. We refer the reader to the accompanying video for examples that illustrate our novel workflows for spatially adaptive simulations and the benefits of our approach. We note that several methods for adaptive fluid simulation have been proposed in recent years, e.g. [Ferstl et al. 2014; Setaluri et al. 2014], and we have drawn a lot of inspiration from these. However, to the best of our knowledge we are the first in computer graphics to propose a collocated velocity, spatially adaptive and matrix-free smoke simulation method that explicitly mitigates non-physical divergent modes.",
"title": ""
},
{
"docid": "69179341377477af8ebe9013c664828c",
"text": "1. Intensive agricultural practices drive biodiversity loss with potentially drastic consequences for ecosystem services. To advance conservation and production goals, agricultural practices should be compatible with biodiversity. Traditional or less intensive systems (i.e. with fewer agrochemicals, less mechanisation, more crop species) such as shaded coffee and cacao agroforests are highlighted for their ability to provide a refuge for biodiversity and may also enhance certain ecosystem functions (i.e. predation). 2. Ants are an important predator group in tropical agroforestry systems. Generally, ant biodiversity declines with coffee and cacao intensification yet the literature lacks a summary of the known mechanisms for ant declines and how this diversity loss may affect the role of ants as predators. 3. Here, how shaded coffee and cacao agroforestry systems protect biodiversity and may preserve related ecosystem functions is discussed in the context of ants as predators. Specifically, the relationships between biodiversity and predation, links between agriculture and conservation, patterns and mechanisms for ant diversity loss with agricultural intensification, importance of ants as control agents of pests and fungal diseases, and whether ant diversity may influence the functional role of ants as predators are addressed. Furthermore, because of the importance of homopteran-tending by ants in the ecological and agricultural literature, as well as to the success of ants as predators, the costs and benefits of promoting ants in agroforests are discussed. 4. Especially where the diversity of ants and other predators is high, as in traditional agroforestry systems, both agroecosystem function and conservation goals will be advanced by biodiversity protection.",
"title": ""
},
{
"docid": "5eddede4043c78a41eb59a938da6e26b",
"text": "In Named-Data Networking (NDN), content is cached in network nodes and served for future requests. This property of NDN allows attackers to inject poisoned content into the network and isolate users from valid content sources. Since a digital signature is embedded in every piece of content in NDN architecture, poisoned content is discarded if routers perform signature verification; however, if every content is verified by every router, it would be overly expensive to do. In our preliminary work, we have suggested a content verification scheme that minimizes unnecessary verification and favors already verified content in the content store, which reduces the verification overhead by as much as 90% without failing to detect every piece of poisoned content. Under this scheme, however, routers are vulnerable to verification attack, in which a large amount of unverified content is accessed to exhaust system resources. In this paper, we carefully look at the possible concerns of our preliminary work, including verification attack, and present a simple but effective solution. The proposed solution mitigates the weakness of our preliminary work and allows this paper to be deployed for real-world applications.",
"title": ""
},
{
"docid": "355fca41993ea19b08d2a9fc19e25722",
"text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.",
"title": ""
},
{
"docid": "7a0ed38af9775a77761d6c089db48188",
"text": "We introduce polyglot language models, recurrent neural network models trained to predict symbol sequences in many different languages using shared representations of symbols and conditioning on typological information about the language to be predicted. We apply these to the problem of modeling phone sequences—a domain in which universal symbol inventories and cross-linguistically shared feature representations are a natural fit. Intrinsic evaluation on held-out perplexity, qualitative analysis of the learned representations, and extrinsic evaluation in two downstream applications that make use of phonetic features show (i) that polyglot models better generalize to held-out data than comparable monolingual models and (ii) that polyglot phonetic feature representations are of higher quality than those learned monolingually.",
"title": ""
},
{
"docid": "430c4f8912557f4286d152608ce5eab8",
"text": "The latex of the tropical species Carica papaya is well known for being a rich source of the four cysteine endopeptidases papain, chymopapain, glycyl endopeptidase and caricain. Altogether, these enzymes are present in the laticifers at a concentration higher than 1 mM. The proteinases are synthesized as inactive precursors that convert into mature enzymes within 2 min after wounding the plant when the latex is abruptly expelled. Papaya latex also contains other enzymes as minor constituents. Several of these enzymes namely a class-II and a class-III chitinase, an inhibitor of serine proteinases and a glutaminyl cyclotransferase have already been purified up to apparent homogeneity and characterized. The presence of a beta-1,3-glucanase and of a cystatin is also suspected but they have not yet been isolated. Purification of these papaya enzymes calls on the use of ion-exchange supports (such as SP-Sepharose Fast Flow) and hydrophobic supports [such as Fractogel TSK Butyl 650(M), Fractogel EMD Propyl 650(S) or Thiophilic gels]. The use of covalent or affinity gels is recommended to provide preparations of cysteine endopeptidases with a high free thiol content (ideally 1 mol of essential free thiol function per mol of enzyme). The selective grafting of activated methoxypoly(ethylene glycol) chains (with M(r) of 5000) on the free thiol functions of the proteinases provides an interesting alternative to the use of covalent and affinity chromatographies especially in the case of enzymes such as chymopapain that contains, in its native state, two thiol functions.",
"title": ""
},
{
"docid": "72e0824602462a21781e9a881041e726",
"text": "In an effort to develop a genomics-based approach to the prediction of drug response, we have developed an algorithm for classification of cell line chemosensitivity based on gene expression profiles alone. Using oligonucleotide microarrays, the expression levels of 6,817 genes were measured in a panel of 60 human cancer cell lines (the NCI-60) for which the chemosensitivity profiles of thousands of chemical compounds have been determined. We sought to determine whether the gene expression signatures of untreated cells were sufficient for the prediction of chemosensitivity. Gene expression-based classifiers of sensitivity or resistance for 232 compounds were generated and then evaluated on independent sets of data. The classifiers were designed to be independent of the cells' tissue of origin. The accuracy of chemosensitivity prediction was considerably better than would be expected by chance. Eighty-eight of 232 expression-based classifiers performed accurately (with P < 0.05) on an independent test set, whereas only 12 of the 232 would be expected to do so by chance. These results suggest that at least for a subset of compounds genomic approaches to chemosensitivity prediction are feasible.",
"title": ""
},
{
"docid": "ec1da767db4247990c26f97483f1b9e1",
"text": "We survey foundational features underlying modern graph query languages. We first discuss two popular graph data models: edge-labelled graphs, where nodes are connected by directed, labelled edges, and property graphs, where nodes and edges can further have attributes. Next we discuss the two most fundamental graph querying functionalities: graph patterns and navigational expressions. We start with graph patterns, in which a graph-structured query is matched against the data. Thereafter, we discuss navigational expressions, in which patterns can be matched recursively against the graph to navigate paths of arbitrary length; we give an overview of what kinds of expressions have been proposed and how they can be combined with graph patterns. We also discuss several semantics under which queries using the previous features can be evaluated, what effects the selection of features and semantics has on complexity, and offer examples of such features in three modern languages that are used to query graphs: SPARQL, Cypher, and Gremlin. We conclude by discussing the importance of formalisation for graph query languages; a summary of what is known about SPARQL, Cypher, and Gremlin in terms of expressivity and complexity; and an outline of possible future directions for the area.",
"title": ""
},
{
"docid": "08d0c860298b03d30a6ef47ec19a2b27",
"text": "This survey paper starts with a critical analysis of various performance metrics for supply chain management (SCM), used by a specific manufacturing company. Then it summarizes how economic theory treats multiple performance metrics. Actually, the paper proposes to deal with multiple metrics in SCM via the balanced scorecard — which measures customers, internal processes, innovations, and finance. To forecast how the values of these metrics will change — once a supply chain is redesigned — simulation may be used. This paper distinguishes four simulation types for SCM: (i) spreadsheet simulation, (ii) system dynamics, (iii) discrete-event simulation, and (iv) business games. These simulation types may explain the bullwhip effect, predict fill rate values, and educate and train users. Validation of simulation models requires sensitivity analysis; a statistical methodology is proposed. The paper concludes with suggestions for a possible research agenda in SCM. A list with 50 references for further study is included. Journal of the Operational Research Society (2003) 00, 000–000. doi:10.1057/palgrave.jors.2601539",
"title": ""
}
] | scidocsrr |
9833a2433885a7438b81d64f39712970 | Theoretical Design of Broadband Multisection Wilkinson Power Dividers With Arbitrary Power Split Ratio | [
{
"docid": "786d1ba82d326370684395eba5ef7cd3",
"text": "A miniaturized dual-band Wilkinson power divider with a parallel LC circuit at the midpoints of two coupled-line sections is proposed in this paper. General design equations for parallel inductor L and capacitor C are derived from even- and odd-mode analysis. Generally speaking, characteristic impedances between even and odd modes are different in two coupled-line sections, and their electrical lengths are also different in inhomogeneous medium. This paper proved that a parallel LC circuit compensates for the characteristic impedance differences and the electrical length differences for dual-band operation. In other words, the proposed model provides self-compensation structure, and no extra compensation circuits are needed. Moreover, the upper limit of the frequency ratio range can be adjusted by two coupling strengths, where loose coupling for the first coupled-line section and tight coupling for the second coupled-line section are preferred for a wider frequency ratio range. Finally, an experimental circuit shows good agreement with the theoretical simulation.",
"title": ""
}
] | [
{
"docid": "6850b52405e8056710f4b3010858cfbe",
"text": "spread of misinformation, rumors and hoaxes. The goal of this work is to introduce a simple modeling framework to study the diffusion of hoaxes and in particular how the availability of debunking information may contain their diffusion. As traditionally done in the mathematical modeling of information diffusion processes, we regard hoaxes as viruses: users can become infected if they are exposed to them, and turn into spreaders as a consequence. Upon verification, users can also turn into non-believers and spread the same attitude with a mechanism analogous to that of the hoax-spreaders. Both believers and non-believers, as time passes, can return to a susceptible state. Our model is characterized by four parameters: spreading rate, gullibility, probability to verify a hoax, and that to forget one's current belief. Simulations on homogeneous, heterogeneous, and real networks for a wide range of parameters values reveal a threshold for the fact-checking probability that guarantees the complete removal of the hoax from the network. Via a mean field approximation, we establish that the threshold value does not depend on the spreading rate but only on the gullibility and forgetting probability. Our approach allows to quantitatively gauge the minimal reaction necessary to eradicate a hoax.",
"title": ""
},
{
"docid": "28b2bbcfb8960ff40f2fe456a5b00729",
"text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation",
"title": ""
},
{
"docid": "ca990b1b43ca024366a2fe73e2a21dae",
"text": "Guanabenz (2,6-dichlorobenzylidene-amino-guanidine) is a centrally acting antihypertensive drug whose mechanism of action is via alpha2 adrenoceptors or, more likely, imidazoline receptors. Guanabenz is marketed as an antihypertensive agent in human medicine (Wytensin tablets, Wyeth Pharmaceuticals). Guanabenz has reportedly been administered to racing horses and is classified by the Association of Racing Commissioners International as a class 3 foreign substance. As such, its identification in a postrace sample may result in significant sanctions against the trainer of the horse. The present study examined liquid chromatographic/tandem quadrupole mass spectrometric (LC-MS/MS) detection of guanabenz in serum samples from horses treated with guanabenz by rapid i.v. injection at 0.04 and 0.2 mg/kg. Using a method adapted from previous work with clenbuterol, the parent compound was detected in serum with an apparent limit of detection of approximately 0.03 ng/ml and the limit of quantitation was 0.2 ng/ml. Serum concentrations of guanabenz peaked at approximately 100 ng/ml after the 0.2 mg/kg dose, and the parent compound was detected for up to 8 hours after the 0.04 mg/kg dose. Urine samples tested after administration of guanabenz at these dosages yielded evidence of at least one glucuronide metabolite, with the glucuronide ring apparently linked to a ring hydroxyl group or a guanidinium hydroxylamine. The LC-MS/MS results presented here form the basis of a confirmatory test for guanabenz in racing horses.",
"title": ""
},
{
"docid": "c4ab0d1934e5c2eb4fc16915f1868ab8",
"text": "During medicine studies, visualization of certain elements is common and indispensable in order to get more information about the way they work. Currently, we resort to the use of photographs -which are insufficient due to being staticor tests in patients, which can be invasive or even risky. Therefore, a low-cost approach is proposed by using a 3D visualization. This paper presents a holographic system built with low-cost materials for teaching obstetrics, where student interaction is performed by using voice and gestures. Our solution, which we called HoloMed, is focused on the projection of a euthocic normal delivery under a web-based infrastructure which also employs a Kinect. HoloMed is divided in three (3) essential modules: a gesture analyzer, a data server, and a holographic projection architecture, which can be executed in several interconnected computers using different network protocols. Tests used for determining the user’s position, illumination factors, and response times, demonstrate HoloMed’s effectiveness as a low-cost system for teaching, using a natural user interface and 3D images.",
"title": ""
},
{
"docid": "4a5c784fd5678666b57c841dfc26f5e8",
"text": "This paperdemonstratesa methodology tomodel and evaluatethe faulttolerancecharacteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major so_ problems and error distributions, we develop two leveis of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a dislributed environmenL Based oft the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated. Results show that I/O management and program flow control are the major sources of software problems in the measured IBM/MVS and VAX/VMS operating systems, while memory management is the major source of software problems in the TandeJn/GUARDIAN operating system. Software errors tend to occur in bursts on both IBM and VAX machines. This phenomemm islesspronounced in theTandem system,which can be attributed to its fault-tolerant design. The fault tolerance in the Tandem system reduces the loss of service due to software failures by an order of magnitude. Although the measured Tandem system is an experimental system working under accelerated stresses, the loss of service due to software problems is much smaller than that in the measured VAX/VMS and IBM/MVS systems. It is shown that the softwme Time To Error distributions obtained _rom data are not simple exponentials. This is in contrast with the conunon assumption of exponential failure times made in fanh-tolerant software models. Investigation of error conelatiom show that about 10% of software failures in the VAXcluster and 20% in the Tandem system occuned conctmeafly on multiple machines. The network-related software in the VAXcluster and the memory management software in the Tandem system are suspected to be software reliability bottlenecks for concurrent failures.",
"title": ""
},
{
"docid": "b27dc4a19b44bf2fd13f299de8c33108",
"text": "A large proportion of the world’s population lives in remote rural areas that are geographically isolated and sparsely populated. This paper proposed a hybrid power generation system suitable for remote area application. The concept of hybridizing renewable energy sources is that the base load is to be covered by largest and firmly available renewable source(s) and other intermittent source(s) should augment the base load to cover the peak load of an isolated mini electric grid system. The study is based on modeling, simulation and optimization of renewable energy system in rural area in Sundargarh district of Orissa state, India. The model has designed to provide an optimal system conFigureuration based on hour-by-hour data for energy availability and demands. Various renewable/alternative energy sources, energy storage and their applicability in terms of cost and performance are discussed. The homer software is used to study and design the proposed hybrid alternative energy power system model. The Sensitivity analysis was carried out using Homer program. Based on simulation results, it has been found that renewable/alternative energy sources will replace the conventional energy sources and would be a feasible solution for distribution of electric power for stand alone applications at remote and distant locations.",
"title": ""
},
{
"docid": "d0bacaa267599486356c175ca5419ede",
"text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.",
"title": ""
},
{
"docid": "5399b924cdf1d034a76811360b6c018d",
"text": "Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account.",
"title": ""
},
{
"docid": "64dc0a4b8392efc03b20fef7437eb55c",
"text": "This paper investigates how retailers at different stages of e-commerce maturity evaluate their entry to e-commerce activities. The study was conducted using qualitative approach interviewing 16 retailers in Saudi Arabia. It comes up with 22 factors that are believed the most influencing factors for retailers in Saudi Arabia. Interestingly, there seem to be differences between retailers in companies at different maturity stages in terms of having different attitudes regarding the issues of using e-commerce. The businesses that have reached a high stage of e-commerce maturity provide practical evidence of positive and optimistic attitudes and practices regarding use of e-commerce, whereas the businesses that have not reached higher levels of maturity provide practical evidence of more negative and pessimistic attitudes and practices. The study, therefore, should contribute to efforts leading to greater e-commerce development in Saudi Arabia and other countries with similar context.",
"title": ""
},
{
"docid": "c21c58dbdf413a54036ac5e6849f81e1",
"text": "We discuss the problem of extending data mining approaches to cases in which data points arise in the form of individual graphs. Being able to find the intrinsic low-dimensionality in ensembles of graphs can be useful in a variety of modeling contexts, especially when coarse-graining the detailed graph information is of interest. One of the main challenges in mining graph data is the definition of a suitable pairwise similarity metric in the space of graphs. We explore two practical solutions to solving this problem: one based on finding subgraph densities, and one using spectral information. The approach is illustrated on three test data sets (ensembles of graphs); two of these are obtained from standard graph generating algorithms, while the graphs in the third example are sampled as dynamic snapshots from an evolving network simulation.",
"title": ""
},
{
"docid": "7875910ad044232b4631ecacfec65656",
"text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ff4c2f1467a141894dbe76491bc06d3b",
"text": "Railways is the major means of transport in most of the countries. Rails are the backbone of the track structure and should be protected from defects. Surface defects are irregularities in the rails caused due to the shear stresses between the rails and wheels of the trains. This type of defects should be detected to avoid rail fractures. The objective of this paper is to propose an innovative technique to detect the surface defect on rail heads. In order to identify the defects, it is essential to extract the rails from the background and further enhance the image for thresholding. The proposed method uses Binary Image Based Rail Extraction (BIBRE) algorithm to extract the rails from the background. The extracted rails are enhanced to achieve uniform background with the help of direct enhancement method. The direct enhancement method enhance the image by enhancing the brightness difference between objects and their backgrounds. The enhanced rail image uses Gabor filters to identify the defects from the rails. The Gabor filters maximizes the energy difference between defect and defect less surface. Thresholding is done based on the energy of the defects. From the thresholded image the defects are identified and a message box is generated when there is a presence of defects.",
"title": ""
},
{
"docid": "c9cd19c2e8ee4b07f969280672d521bf",
"text": "The owner and users of a sensor network may be different, which necessitates privacy-preserving access control. On the one hand, the network owner need enforce strict access control so that the sensed data are only accessible to users willing to pay. On the other hand, users wish to protect their respective data access patterns whose disclosure may be used against their interests. This paper presents DP2AC, a Distributed Privacy-Preserving Access Control scheme for sensor networks, which is the first work of its kind. Users in DP2AC purchase tokens from the network owner whereby to query data from sensor nodes which will reply only after validating the tokens. The use of blind signatures in token generation ensures that tokens are publicly verifiable yet unlinkable to user identities, so privacy-preserving access control is achieved. A central component in DP2AC is to prevent malicious users from reusing tokens, for which we propose a suite of distributed token reuse detection (DTRD) schemes without involving the base station. These schemes share the essential idea that a sensor node checks with some other nodes (called witnesses) whether a token has been used, but they differ in how the witnesses are chosen. We thoroughly compare their performance with regard to TRD capability, communication overhead, storage overhead, and attack resilience. The efficacy and efficiency of DP2AC are confirmed by detailed performance evaluations.",
"title": ""
},
{
"docid": "21e17ad2d2a441940309b7eacd4dec6e",
"text": "ÐWith a huge amount of data stored in spatial databases and the introduction of spatial components to many relational or object-relational databases, it is important to study the methods for spatial data warehousing and OLAP of spatial data. In this paper, we study methods for spatial OLAP, by integration of nonspatial OLAP methods with spatial database implementation techniques. A spatial data warehouse model, which consists of both spatial and nonspatial dimensions and measures, is proposed. Methods for computation of spatial data cubes and analytical processing on such spatial data cubes are studied, with several strategies proposed, including approximation and selective materialization of the spatial objects resulted from spatial OLAP operations. The focus of our study is on a method for spatial cube construction, called object-based selective materialization, which is different from cuboid-based selective materialization proposed in previous studies of nonspatial data cube construction. Rather than using a cuboid as an atomic structure during the selective materialization, we explore granularity on a much finer level, that of a single cell of a cuboid. Several algorithms are proposed for object-based selective materialization of spatial data cubes and the performance study has demonstrated the effectiveness of these techniques. Index TermsÐData warehouse, data mining, online analytical processing (OLAP), spatial databases, spatial data analysis, spatial",
"title": ""
},
{
"docid": "b7bf3ae864ce774874041b0e5308323f",
"text": "This paper examines factors that influence prices of most common five cryptocurrencies such Bitcoin, Ethereum, Dash, Litecoin, and Monero over 20102018 using weekly data. The study employs ARDL technique and documents several findings. First, cryptomarket-related factors such as market beta, trading volume, and volatility appear to be significant determinant for all five cryptocurrencies both in shortand long-run. Second, attractiveness of cryptocurrencies also matters in terms of their price determination, but only in long-run. This indicates that formation (recognition) of the attractiveness of cryptocurrencies are subjected to time factor. In other words, it travels slowly within the market. Third, SP500 index seems to have weak positive long-run impact on Bitcoin, Ethereum, and Litcoin, while its sign turns to negative losing significance in short-run, except Bitcoin that generates an estimate of -0.20 at 10% significance level. Lastly, error-correction models for Bitcoin, Etherem, Dash, Litcoin, and Monero show that cointegrated series cannot drift too far apart, and converge to a longrun equilibrium at a speed of 23.68%, 12.76%, 10.20%, 22.91%, and 14.27% respectively.",
"title": ""
},
{
"docid": "85fc78cc3f71b784063b8b564e6509a9",
"text": "Numerous research papers have listed different vectors of personally identifiable information leaking via tradition al and mobile Online Social Networks (OSNs) and highlighted the ongoing aggregation of data about users visiting popular We b sites. We argue that the landscape is worsening and existing proposals (including the recent U.S. Federal Trade Commission’s report) do not address several key issues. We examined over 100 popular non-OSN Web sites across a number of categories where tens of millions of users representing d iverse demographics have accounts, to see if these sites leak private information to prominent aggregators. Our results raise considerable concerns: we see leakage in sites for every category we examined; fully 56% of the sites directly leak pieces of private information with this result growing to 75% if we also include leakage of a site userid. Sensitive search strings sent to healthcare Web sites and travel itineraries on flight reservation sites are leaked in 9 of the top 10 sites studied for each category. The community needs a clear understanding of the shortcomings of existing privac y protection measures and the new proposals. The growing disconnect between the protection measures and increasing leakage and linkage suggests that we need to move beyond the losing battle with aggregators and examine what roles first-party sites can play in protecting privacy of their use rs.",
"title": ""
},
{
"docid": "587f7821fc7ecfe5b0bbbd3b08b9afe2",
"text": "The most commonly used method for cuffless blood pressure (BP) measurement is using pulse transit time (PTT), which is based on Moens-Korteweg (M-K) equation underlying the assumption that arterial geometries such as the arterial diameter keep unchanged. However, the arterial diameter is dynamic which varies over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair the BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects for assessing parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, the spectral analysis revealed that the dominant frequency component of PIR, HR and SBP, shifted significantly from high frequency (HF) to low frequency (LF), but not obvious in that of PTT. These results demonstrated that PIR can be potentially used to evaluate the smooth muscle tone which modulates arterial BP in the LF range. The PTT-based BP measurement that take into account the PIR could therefore improve its estimation accuracy.",
"title": ""
},
{
"docid": "9b17c6ff30e91f88e52b2db4eb331478",
"text": "Network traffic classification has become significantly important with rapid growth of current Internet network and online applications. There have been numerous studies on this topic which have led to many different approaches. Most of these approaches use predefined features extracted by an expert in order to classify network traffic. In contrast, in this study, we propose a deep learning based approach which integrates both feature extraction and classification phases into one system. Our proposed scheme, called “Deep Packet,” can handle both traffic characterization, in which the network traffic is categorized into major classes (e.g., FTP and P2P), and application identification in which identification of end-user applications (e.g., BitTorrent and Skype) is desired. Contrary to the most of current methods, Deep Packet can identify encrypted traffic and also distinguishes between VPN and non-VPN network traffic. After an initial pre-processing phase on data, packets are fed into Deep Packet framework that embeds stacked autoencoder and convolution neural network (CNN) in order to classify network traffic. Deep packet with CNN as its classification model achieved F1 score of 0.95 in application identification task and it also accomplished F1 score of 0.97 in traffic characterization task. To the best of our knowledge, Deep Packet outperforms all of the proposed classification methods on UNB ISCX VPN-nonVPN dataset.",
"title": ""
},
{
"docid": "9fd5e182851ff0be67e8865c336a1f77",
"text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transaction is either encrypted or signed by the sender so the confidentiality and authenticity are preserved.",
"title": ""
},
{
"docid": "3d04155f68912f84b02788f93e9da74c",
"text": "Data partitioning significantly improves the query performance in distributed database systems. A large number of techniques have been proposed to efficiently partition a dataset for a given query workload. However, many modern analytic applications involve ad-hoc or exploratory analysis where users do not have a representative query workload upfront. Furthermore, workloads change over time as businesses evolve or as analysts gain better understanding of their data. Static workload-based data partitioning techniques are therefore not suitable for such settings. In this paper, we describe the demonstration of Amoeba, a distributed storage system which uses adaptive multi-attribute data partitioning to efficiently support ad-hoc as well as recurring queries. Amoeba applies a robust partitioning algorithm such that ad-hoc queries on all attributes have similar performance gains. Thereafter, Amoeba adaptively repartitions the data based on the observed query sequence, i.e., the system improves over time. All along Amoeba offers both adaptivity (i.e., adjustments according to workload changes) as well as robustness (i.e., avoiding performance spikes due to workload changes). We propose to demonstrate Amoeba on scenarios from an internet-ofthings startup that tracks user driving patterns. We invite the audience to interactively fire fast ad-hoc queries, observe multi-dimensional adaptivity, and play with a robust/reactive knob in Amoeba. The web front end displays the layout changes, runtime costs, and compares it to Spark with both default and workload-aware partitioning.",
"title": ""
}
] | scidocsrr |
4aa78772e58fc845b17d5a6588b80637 | 3D feature points detection on sparse and non-uniform pointcloud for SLAM | [
{
"docid": "33915af49384d028a591d93336feffd6",
"text": "This paper presents a new approach for recognition of 3D objects that are represented as 3D point clouds. We introduce a new 3D shape descriptor called Intrinsic Shape Signature (ISS) to characterize a local/semi-local region of a point cloud. An intrinsic shape signature uses a view-independent representation of the 3D shape to match shape patches from different views directly, and a view-dependent transform encoding the viewing geometry to facilitate fast pose estimation. In addition, we present a highly efficient indexing scheme for the high dimensional ISS shape descriptors, allowing for fast and accurate search of large model databases. We evaluate the performance of the proposed algorithm on a very challenging task of recognizing different vehicle types using a database of 72 models in the presence of sensor noise, obscuration and scene clutter.",
"title": ""
}
] | [
{
"docid": "938afbc53340a3aa6e454d17789bf021",
"text": "BACKGROUND\nAll cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a \"shortcut\" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a \"shortcut\" for trustworthiness judgments.",
"title": ""
},
{
"docid": "b017fd773265c73c7dccad86797c17b8",
"text": "Active learning, which has a strong impact on processing data prior to the classification phase, is an active research area within the machine learning community, and is now being extended for remote sensing applications. To be effective, classification must rely on the most informative pixels, while the training set should be as compact as possible. Active learning heuristics provide capability to select unlabeled data that are the “most informative” and to obtain the respective labels, contributing to both goals. Characteristics of remotely sensed image data provide both challenges and opportunities to exploit the potential advantages of active learning. We present an overview of active learning methods, then review the latest techniques proposed to cope with the problem of interactive sampling of training pixels for classification of remotely sensed data with support vector machines (SVMs). We discuss remote sensing specific approaches dealing with multisource and spatially and time-varying data, and provide examples for high-dimensional hyperspectral imagery.",
"title": ""
},
{
"docid": "e7bf90ed4a5b4a509f41a7afc7ffde1e",
"text": "Previous theorizing by clinical psychologists suggests that adolescent narcissism may be related to parenting practices (Kernberg, 1975; Kohut, 1977). Two studies investigated the relations between parenting dimensions (i.e., warmth, monitoring, and psychological control) and narcissism both with and without removing from narcissism variance associated with trait self-esteem. Two hundred and twenty-two college students (Study 1) and 212 high school students (Study 2) completed the Narcissistic Personality Inventory, a trait self-esteem scale, and standard measures of the three parenting dimensions. Parental warmth was associated positively and monitoring was associated negatively with both types of narcissism. Psychological control was positively associated with narcissism scores from which trait self-esteem variance had been removed. Clinical implications of the findings are discussed, limitations are addressed, and future research directions are suggested.",
"title": ""
},
{
"docid": "7621e0dcdad12367dc2cfcd12d31c719",
"text": "Microblogging sites have emerged as major platforms for bloggers to create and consume posts as well as to follow other bloggers and get informed of their updates. Due to the large number of users, and the huge amount of posts they create, it becomes extremely difficult to identify relevant and interesting blog posts. In this paper, we propose a novel convex collective matrix completion (CCMC) method that effectively utilizes user-item matrix and incorporates additional user activity and topic-based signals to recommend relevant content. The key advantage of CCMC over existing methods is that it can obtain a globally optimal solution and can easily scale to large-scale matrices using Hazan’s algorithm. To the best of our knowledge, this is the first work which applies and studies CCMC as a recommendation method in social media. We conduct a large scale study and show significant improvement over existing state-ofthe-art approaches.",
"title": ""
},
{
"docid": "bd516d0b64e483d2210b20e4905ecd52",
"text": "With the rapid growth of the internet and the spread of the information contained therein, the volume of information available on the web is more than the ability of users to manage, capture and keep the information up to date. One solution to this problem are personalization and recommender systems. Recommender systems use the comments of the group of users so that, to help people in that group more effectively to identify their favorite items from a huge set of choices. In recent years, the web has seen very strong growth in the use of blogs. Considering the high volume of information in blogs, bloggers are in trouble to find the desired information and find blogs with similar thoughts and desires. Therefore, considering the mass of information for the blogs, a blog recommender system seems to be necessary. In this paper, by combining different methods of clustering and collaborative filtering, personalized recommender system for Persian blogs is suggested.",
"title": ""
},
{
"docid": "d6cf367f29ed1c58fb8fd0b7edf69458",
"text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.",
"title": ""
},
{
"docid": "2b23723ab291aeff31781cba640b987b",
"text": "As the urban population is increasing, more and more cars are circulating in the city to search for parking spaces which contributes to the global problem of traffic congestion. To alleviate the parking problems, smart parking systems must be implemented. In this paper, the background on parking problems is introduced and relevant algorithms, systems, and techniques behind the smart parking are reviewed and discussed. This paper provides a good insight into the guidance, monitoring and reservations components of the smart car parking and directions to the future development.",
"title": ""
},
{
"docid": "c63fa63e8af9d5b25ca7f40a710cfcc2",
"text": "With the recent development of deep learning, research in AI has gained new vigor and prominence. While machine learning has succeeded in revitalizing many research fields, such as computer vision, speech recognition, and medical diagnosis, we are yet to witness impressive progress in natural language understanding. One of the reasons behind this unmatched expectation is that, while a bottom-up approach is feasible for pattern recognition, reasoning and understanding often require a top-down approach. In this work, we couple sub-symbolic and symbolic AI to automatically discover conceptual primitives from text and link them to commonsense concepts and named entities in a new three-level knowledge representation for sentiment analysis. In particular, we employ recurrent neural networks to infer primitives by lexical substitution and use them for grounding common and commonsense knowledge by means of multi-dimensional scaling.",
"title": ""
},
{
"docid": "7f7a67af972d26746ce1ae0c7ec09499",
"text": "We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination provide a 20% boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2%, representing an improvement over previously reported results on this benchmark task.",
"title": ""
},
{
"docid": "f31555cb1720843ec4921428dc79449e",
"text": "Software architectures shift developers’ focus from lines-of-code to coarser-grained architectural elements and their interconnection structure. Architecture description languages (ADLs) have been proposed as modeling notations to support architecture-based development. There is, however, little consensus in the research community on what is an ADL, what aspects of an architecture should be modeled in an ADL, and which ADL is best suited for a particular problem. Furthermore, the distinction is rarely made between ADLs on one hand and formal specification, module interconnection, simulation, and programming languages on the other. This paper attempts to provide an answer to these questions. It motivates and presents a definition and a classification framework for ADLs. The utility of the definition is demonstrated by using it to differentiate ADLs from other modeling notations. The framework is used to classify and compare several existing ADLs.1",
"title": ""
},
{
"docid": "25a94dbd1c02a6183df945d4684a0f31",
"text": "The success of applying policy gradient reinforcement learning (RL) to difficult control tasks hinges crucially on the ability to determine a sensible initialization for the policy. Transfer learning methods tackle this problem by reusing knowledge gleaned from solving other related tasks. In the case of multiple task domains, these algorithms require an inter-task mapping to facilitate knowledge transfer across domains. However, there are currently no general methods to learn an inter-task mapping without requiring either background knowledge that is not typically present in RL settings, or an expensive analysis of an exponential number of inter-task mappings in the size of the state and action spaces. This paper introduces an autonomous framework that uses unsupervised manifold alignment to learn intertask mappings and effectively transfer samples between different task domains. Empirical results on diverse dynamical systems, including an application to quadrotor control, demonstrate its effectiveness for cross-domain transfer in the context of policy gradient RL. Introduction Policy gradient reinforcement learning (RL) algorithms have been applied with considerable success to solve highdimensional control problems, such as those arising in robotic control and coordination (Peters & Schaal 2008). These algorithms use gradient ascent to tune the parameters of a policy to maximize its expected performance. Unfortunately, this gradient ascent procedure is prone to becoming trapped in local maxima, and thus it has been widely recognized that initializing the policy in a sensible manner is crucial for achieving optimal performance. For instance, one typical strategy is to initialize the policy using human demonstrations (Peters & Schaal 2006), which may be infeasible when the task cannot be easily solved by a human. This paper explores a different approach: instead of initializing the policy at random (i.e., tabula rasa) or via human demonstrations, we instead use transfer learning (TL) to initialize the policy for a new target domain based on knowledge from one or more source tasks. In RL transfer, the source and target tasks may differ in their formulations (Taylor & Stone 2009). In particular, Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. when the source and target tasks have different state and/or action spaces, an inter-task mapping (Taylor et al. 2007a) that describes the relationship between the two tasks is typically needed. This paper introduces a framework for autonomously learning an inter-task mapping for cross-domain transfer in policy gradient RL. First, we learn an inter-state mapping (i.e., a mapping between states in two tasks) using unsupervised manifold alignment. Manifold alignment provides a powerful and general framework that can discover a shared latent representation to capture intrinsic relations between different tasks, irrespective of their dimensionality. The alignment also yields an implicit inter-action mapping that is generated by mapping tracking states from the source to the target. Given the mapping between task domains, source task trajectories are then used to initialize a policy in the target task, significantly improving the speed of subsequent learning over an uninformed initialization. This paper provides the following contributions. First, we introduce a novel unsupervised method for learning interstate mappings using manifold alignment. 
Second, we show that the discovered subspace can be used to initialize the target policy. Third, our empirical validation conducted on four dissimilar and dynamically chaotic task domains (e.g., controlling a three-link cart-pole and a quadrotor aerial vehicle) shows that our approach can a) automatically learn an inter-state mapping across MDPs from the same domain, b) automatically learn an inter-state mapping across MDPs from very different domains, and c) transfer informative initial policies to achieve higher initial performance and reduce the time needed for convergence to near-optimal behavior.",
"title": ""
},
{
"docid": "afabc44116cc1141c00c3528f1509c18",
"text": "Low-rank representation (LRR) has recently attracted a great deal of attention due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. For a given set of observed data corrupted with sparse errors, LRR aims at learning a lowest-rank representation of all data jointly. LRR has broad applications in pattern recognition, computer vision and signal processing. In the real world, data often reside on low-dimensional manifolds embedded in a high-dimensional ambient space. However, the LRR method does not take into account the non-linear geometric structures within data, thus the locality and similarity information among data may be missing in the learning process. To improve LRR in this regard, we propose a general Laplacian regularized low-rank representation framework for data representation where a hypergraph Laplacian regularizer can be readily introduced into, i.e., a Non-negative Sparse Hyper-Laplacian regularized LRR model (NSHLRR). By taking advantage of the graph regularizer, our proposed method not only can represent the global low-dimensional structures, but also capture the intrinsic non-linear geometric information in data. The extensive experimental results on image clustering, semi-supervised image classification and dimensionality reduction tasks demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "1448b02c9c14e086a438d76afa1b2fde",
"text": "This paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis (LDA) in the presence of a small ratio between the number of training samples and the number of spectral features. In these particular ill-posed problems, a reliable LDA requires one to introduce regularization for problem solving. Nonetheless, in such a challenging scenario, the resulting regularized LDA (RLDA) is highly sensitive to the tuning of the regularization parameter. In this context, we introduce in the remote sensing community an efficient version of the RLDA recently presented by Ye to cope with critical ill-posed problems. In addition, several LDA-based classifiers (i.e., penalized LDA, orthogonal LDA, and uncorrelated LDA) are compared theoretically and experimentally with the standard LDA and the RLDA. Method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images. Experimental results confirm the effectiveness of the presented RLDA technique and point out the main properties of other analyzed LDA techniques in critical ill-posed hyperspectral image classification problems.",
"title": ""
},
{
"docid": "329343cec99c221e6f6ce8e3f1dbe83f",
"text": "Artificial Neural Networks (ANN) play a very vital role in making stock market predictions. As per the literature survey, various researchers have used various approaches to predict the prices of stock market. Some popular approaches used by researchers are Artificial Neural Networks, Genetic Algorithms, Fuzzy Logic, Auto Regressive Models and Support Vector Machines. This study presents ANN based computational approach for predicting the one day ahead closing prices of companies from the three different sectors:IT Sector (Wipro, TCS and Infosys), Automobile Sector (Maruti Suzuki Ltd.) and Banking Sector (ICICI Bank). Different types of artificial neural networks based models like Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), Generalized Regression Neural Network (GRNN) and Layer Recurrent Neural Network (LRNN) have been studied and used to forecast the short term and long term share prices of Wipro, TCS, Infosys, Maruti Suzuki and ICICI Bank. All the networks were trained with the 1100 days of trading data and predicted the prices up to next 6 months. Predicted output was generated through available historical data. Experimental results show that BPNN model gives minimum error (MSE) as compared to the RBFNN and GRNN models. GRNN model performs better as compared to RBFNN model. Forecasting performance of LRNN model is found to be much better than other three models. Keywordsartificial intelligence, back propagation, mean square error, artificial neural network.",
"title": ""
},
{
"docid": "0ac9ad839f21bd03342dd786b09155fe",
"text": "Graphs are fundamental data structures which concisely capture the relational structure in many important real-world domains, such as knowledge graphs, physical and social interactions, language, and chemistry. Here we introduce a powerful new approach for learning generative models over graphs, which can capture both their structure and attributes. Our approach uses graph neural networks to express probabilistic dependencies among a graph’s nodes and edges, and can, in principle, learn distributions over any arbitrary graph. In a series of experiments our results show that once trained, our models can generate good quality samples of both synthetic graphs as well as real molecular graphs, both unconditionally and conditioned on data. Compared to baselines that do not use graph-structured representations, our models often perform far better. We also explore key challenges of learning generative models of graphs, such as how to handle symmetries and ordering of elements during the graph generation process, and offer possible solutions. Our work is the first and most general approach for learning generative models over arbitrary graphs, and opens new directions for moving away from restrictions of vectorand sequence-like knowledge representations, toward more expressive and flexible relational data structures.",
"title": ""
},
{
"docid": "33fe68214ea062f2cdb310a74a9d6d8b",
"text": "In this study, the authors examine the relationship between abusive supervision and employee workplace deviance. The authors conceptualize abusive supervision as a type of aggression. They use work on retaliation and direct and displaced aggression as a foundation for examining employees' reactions to abusive supervision. The authors predict abusive supervision will be related to supervisor-directed deviance, organizational deviance, and interpersonal deviance. Additionally, the authors examine the moderating effects of negative reciprocity beliefs. They hypothesized that the relationship between abusive supervision and supervisor-directed deviance would be stronger when individuals hold higher negative reciprocity beliefs. The results support this hypothesis. The implications of the results for understanding destructive behaviors in the workplace are examined.",
"title": ""
},
{
"docid": "777f87414c0185739a92bbdb0f6aa994",
"text": "Limb apraxia (LA), is a neuropsychological syndrome characterized by difficulty in performing gestures and may therefore be an ideal model for investigating whether action execution deficits are causatively linked to deficits in action understanding. We tested 33 left brain-damaged patients and 8 right brain-damaged patients for the presence of the LA. Importantly, we also tested all the patients in an ad hoc developed gesture recognition task wherein an actor performs, either correctly or incorrectly, transitive (using objects) or intransitive (without objects) meaningful conventional limb gestures. Patients were instructed to judge whether the observed gesture was correct or incorrect. Lesion analysis enabled us to evaluate the relationship between specific brain regions and behavioral performance in gesture execution and gesture comprehension. We found that LA was present in 21 left brain-damaged patients and it was linked to frontal and parietal lesions. Moreover, we found that recognition of correct execution of familiar gestures performed by others was more impaired in patients with LA than in nonapraxic patients. Crucially, the gesture comprehension deficit correlated with damage to the opercular and triangularis portions of the inferior frontal gyrus, two regions that are involved in complex aspects of action-related processing. In contrast, no such relationship was observed with lesions centered on the inferior parietal cortex. The present findings suggest that lesions to left frontal regions that are involved in planning and performing actions are causatively associated with deficits in the recognition of the correct execution of meaningful gestures.",
"title": ""
},
{
"docid": "c3e8960170cb72f711263e7503a56684",
"text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.",
"title": ""
},
{
"docid": "8e3b73204d1d62337c4b2aabdbaa8973",
"text": "The goal of this paper is to analyze the geometric properties of deep neural network classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical investigation, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations in the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.",
"title": ""
}
] | scidocsrr |
f3a36f7f89361cd5e6f8e3630ba9b856 | Vision-Based Real-Time Aerial Object Localization and Tracking for UAV Sensing System | [
{
"docid": "79d6aa27e761b25348481ffed15a8bd9",
"text": "Correlation filter (CF) based trackers have recently gained a lot of popularity due to their impressive performance on benchmark datasets, while maintaining high frame rates. A significant amount of recent research focuses on the incorporation of stronger features for a richer representation of the tracking target. However, this only helps to discriminate the target from background within a small neighborhood. In this paper, we present a framework that allows the explicit incorporation of global context within CF trackers. We reformulate the original optimization problem and provide a closed form solution for single and multi-dimensional features in the primal and dual domain. Extensive experiments demonstrate that this framework significantly improves the performance of many CF trackers with only a modest impact on frame rate.",
"title": ""
},
{
"docid": "72bbc123119afa92f652d0a5332671e9",
"text": "Both detection and tracking people are challenging problems, especially in complex real world scenes that commonly involve multiple people, complicated occlusions, and cluttered or even moving backgrounds. People detectors have been shown to be able to locate pedestrians even in complex street scenes, but false positives have remained frequent. The identification of particular individuals has remained challenging as well. Tracking methods are able to find a particular individual in image sequences, but are severely challenged by real-world scenarios such as crowded street scenes. In this paper, we combine the advantages of both detection and tracking in a single framework. The approximate articulation of each person is detected in every frame based on local features that model the appearance of individual body parts. Prior knowledge on possible articulations and temporal coherency within a walking cycle are modeled using a hierarchical Gaussian process latent variable model (hGPLVM). We show how the combination of these results improves hypotheses for position and articulation of each person in several subsequent frames. We present experimental results that demonstrate how this allows to detect and track multiple people in cluttered scenes with reoccurring occlusions.",
"title": ""
}
] | [
{
"docid": "0a67326bde22c1a9b2d6407d141d2a7a",
"text": "BACKGROUND\nReducing fruit and vegetable (F&V) prices is a frequently considered policy to improve dietary habits in the context of health promotion. However, evidence on the effectiveness of this intervention is limited.\n\n\nOBJECTIVE\nThe objective was to examine the effects of a 50% price discount on F&Vs or nutrition education or a combination of both on supermarket purchases.\n\n\nDESIGN\nA 6-mo randomized controlled trial within Dutch supermarkets was conducted. Regular supermarket shoppers were randomly assigned to 1 of 4 conditions: 50% price discounts on F&Vs, nutrition education, 50% price discounts plus nutrition education, or no intervention. A total of 199 participants provided baseline data; 151 (76%) were included in the final analysis. F&V purchases were measured by using supermarket register receipts at baseline, at 1 mo after the start of the intervention, at 3 mo, at 6 mo (end of the intervention period), and 3 mo after the intervention ended (9 mo).\n\n\nRESULTS\nAdjusted multilevel models showed significantly higher F&V purchases (per household/2 wk) as a result of the price discount (+3.9 kg; 95% CI: 1.5, 6.3 kg) and the discount plus education intervention (+5.6 kg; 95% CI: 3.2, 7.9 kg) at 6 mo compared with control. Moreover, the percentage of participants who consumed recommended amounts of F&Vs (≥400 g/d) increased from 42.5% at baseline to 61.3% at 6 mo in both discount groups (P = 0.03). Education alone had no significant effect.\n\n\nCONCLUSIONS\nDiscounting F&Vs is a promising intervention strategy because it resulted in substantially higher F&V purchases, and no adverse effects were observed. Therefore, pricing strategies form an important focus for future interventions or policy. However, the long-term effects and the ultimate health outcomes require further investigation. This trial was registered at the ISRCTN Trial Register as number ISRCTN56596945 and at the Dutch Trial Register (http://www.trialregister.nl/trialreg/index.asp) as number NL22568.029.08.",
"title": ""
},
{
"docid": "ab1c7ede012bd20f30bab66fcaec49fa",
"text": "Visual-inertial navigation systems (VINS) have prevailed in various applications, in part because of the complementary sensing capabilities and decreasing costs as well as sizes. While many of the current VINS algorithms undergo inconsistent estimation, in this paper we introduce a new extended Kalman filter (EKF)-based approach towards consistent estimates. To this end, we impose both state-transition and obervability constraints in computing EKF Jacobians so that the resulting linearized system can best approximate the underlying nonlinear system. Specifically, we enforce the propagation Jacobian to obey the semigroup property, thus being an appropriate state-transition matrix. This is achieved by parametrizing the orientation error state in the global, instead of local, frame of reference, and then evaluating the Jacobian at the propagated, instead of the updated, state estimates. Moreover, the EKF linearized system ensures correct observability by projecting the most-accurate measurement Jacobian onto the observable subspace so that no spurious information is gained. The proposed algorithm is validated by both Monte-Carlo simulation and real-world experimental tests.",
"title": ""
},
{
"docid": "a8850b2f95fbd8f5904b174dd1a556c3",
"text": "Cube, a massively-parallel FPGA-based platform is presented. The machine is made from boards each containing 64 FPGA devices and eight boards can be connected in a cube structure for a total of 512 FPGA devices. With high bandwidth systolic inter-FPGA communication and a flexible programming scheme, the result is a low power, high density and scalable supercomputing machine suitable for various large scale parallel applications. A RC4 key search engine was built as an demonstration application. In a fully implemented Cube, the engine can perform a full search on the 40-bit key space within 3 minutes, this being 359 times faster than a multi-threaded software implementation running on a 2.5GHz Intel Quad-Core Xeon processor.",
"title": ""
},
{
"docid": "01ea3bf8f7694f76b486265edbdeb834",
"text": "We deepen and extend resource-level theorizing about sustainable competitive advantage by developing a formal model of resource development in competitive markets. Our model incorporates three important barriers to imitation: time compression diseconomies, causal ambiguity and the magnitude of ...xed investments. Time compression diseconomies are derived from a micro-model of resource development with diminishing returns to effort. We characterize two dimensions of sustainability: whether a resource is imitable and how long imitation takes. We identify conditions under which competitive advantage does not lead to superior performance and show that an imitator can sometimes bene...t from increases in causal ambiguity. Despite recent criticisms, we rea¢rm the usefulness of a resource-level of analysis, especially when the focus is on resources developed through internal projects with identi...able stopping times.",
"title": ""
},
{
"docid": "3f0b6a3238cf60d7e5d23363b2affe95",
"text": "This paper presents a new strategy to control the generated power that comes from the energy sources existing in autonomous and isolated Microgrids. In this particular study, the power system consists of a power electronic converter supplied by a battery bank, which is used to form the AC grid (grid former converter), an energy source based on a wind turbine with its respective power electronic converter (grid supplier converter), and the power consumers (loads). The main objective of this proposed strategy is to control the state of charge of the battery bank limiting the voltage on its terminals by controlling the power generated by the energy sources. This is done without using dump loads or any physical communication among the power electronic converters or the individual energy source controllers. The electrical frequency of the microgrid is used to inform to the power sources and their respective converters the amount of power they need to generate in order to maintain the battery-bank state of charge below or equal its maximum allowable limit. It is proposed a modified droop control to implement this task.",
"title": ""
},
{
"docid": "369cdea246738d5504669e2f9581ae70",
"text": "Content Security Policy (CSP) is an emerging W3C standard introduced to mitigate the impact of content injection vulnerabilities on websites. We perform a systematic, large-scale analysis of four key aspects that impact on the effectiveness of CSP: browser support, website adoption, correct configuration and constant maintenance. While browser support is largely satisfactory, with the exception of few notable issues, our analysis unveils several shortcomings relative to the other three aspects. CSP appears to have a rather limited deployment as yet and, more crucially, existing policies exhibit a number of weaknesses and misconfiguration errors. Moreover, content security policies are not regularly updated to ban insecure practices and remove unintended security violations. We argue that many of these problems can be fixed by better exploiting the monitoring facilities of CSP, while other issues deserve additional research, being more rooted into the CSP design.",
"title": ""
},
{
"docid": "f985b4db1646afdd014b2668267e947f",
"text": "The encode-decoder framework has shown recent success in image captioning. Visual attention, which is good at detailedness, and semantic attention, which is good at comprehensiveness, have been separately proposed to ground the caption on the image. In this paper, we propose the Stepwise Image-Topic Merging Network (simNet) that makes use of the two kinds of attention at the same time. At each time step when generating the caption, the decoder adaptively merges the attentive information in the extracted topics and the image according to the generated context, so that the visual information and the semantic information can be effectively combined. The proposed approach is evaluated on two benchmark datasets and reaches the state-of-the-art performances.1",
"title": ""
},
{
"docid": "26992fcd5b560f11eb388d27d51527e9",
"text": "The concept of digital twin, a kind of virtual things with the precise states of the corresponding physical systems, is suggested by industrial domains to accurately estimate the status and predict the operation of machines. Digital twin can be used for development of critical systems, such as self-driving cars and auto-production factories. There, however, will be so different digital twins in terms of resolution, complexity, modelling languages and formats. It is required to cooperate heterogeneous digital twins in standardized ways. Since a centralized digital twin system uses too big resources and energies, it is preferable to make large-scale digital twin system geographically and logically distributed over the Internet. In addition, efficient interworking functions between digital twins and the physical systems are required also. In this paper, we propose a novel architecture of large-scale digital twin platform including distributed digital twin cooperation framework, flexible data-centric communication middleware, and the platform based digital twin application to develop a reliable advanced driver assistance system.",
"title": ""
},
{
"docid": "b6da971f13c1075ce1b4aca303e7393f",
"text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.",
"title": ""
},
{
"docid": "4cf2c80fe55f2b41816f23895b64a29c",
"text": "Visual question answering is fundamentally compositional in nature—a question like where is the dog? shares substructure with questions like what color is the dog? and where is the cat? This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural “modules” into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.",
"title": ""
},
{
"docid": "fb0b06eb6238c008bef7d3b2e9a80792",
"text": "An N-dimensional image is divided into “object” and “background” segments using a graph cut approach. A graph is formed by connecting all pairs of neighboring image pixels (voxels) by weighted edges. Certain pixels (voxels) have to be a priori identified as object or background seeds providing necessary clues about the image content. Our objective is to find the cheapest way to cut the edges in the graph so that the object seeds are completely separated from the background seeds. If the edge cost is a decreasing function of the local intensity gradient then the minimum cost cut should produce an object/background segmentation with compact boundaries along the high intensity gradient values in the image. An efficient, globally optimal solution is possible via standard min-cut/max-flow algorithms for graphs with two terminals. We applied this technique to interactively segment organs in various 2D and 3D medical images.",
"title": ""
},
{
"docid": "5d95296e187727b65316bfebc2ea35ff",
"text": "Prior investigations on the beneficial effect of dietary processed tomato products and lycopene on prostate cancer risk suggested that lycopene may require the presence of other constituents to exert its chemopreventive potential. We investigated whether ketosamines, a group of carbohydrate derivatives present in dehydrated tomato products, may interact with lycopene against prostate tumorigenesis. One ketosamine, FruHis, strongly synergized with lycopene against proliferation of the highly metastatic rat prostate adenocarcinoma MAT-LyLu cell line in vitro. The FruHis/lycopene combination significantly inhibited in vivo tumor formation by MAT-LyLu cells in syngeneic Copenhagen rats. Energy-balanced diets, supplemented with tomato paste, tomato powder, or tomato paste plus FruHis, were fed to Wistar-Unilever rats (n = 20 per group) treated with N-nitroso-N-methylurea and testosterone to induce prostate carcinogenesis. Survival from carcinogenesis was lowest in the control group (median survival time, 40 weeks) and highest in the group fed the tomato paste/FruHis diet (51 weeks; P = 0.004, versus control). The proportions of dying rats with macroscopic prostate tumors in the control, tomato paste, tomato powder, and tomato paste/FruHis groups were 63% (12 of 19), 39% (5 of 13), 43% (6 of 14), and 18% (2 of 11), respectively. FruHis completely blocked DNA oxidative degradation at >250 micromol/L in vitro, whereas neither ascorbate nor phenolic antioxidants from tomato were effective protectors in this assay. FruHis, therefore, may exert tumor-preventive effect through its antioxidant activity and interaction with lycopene.",
"title": ""
},
{
"docid": "7b6d68ef91e61a701380bfcb2d859771",
"text": "This review provides an overview of how women adjust emotionally to the various phases of IVF treatment in terms of anxiety, depression or general distress before, during and after different treatment cycles. A systematic scrutiny of the literature yielded 706 articles that paid attention to emotional aspects of IVF treatment of which 27 investigated the women's emotional adjustment with standardized measures in relation to norm or control groups. Most studies involved concurrent comparisons between women in different treatment phases and different types of control groups. The findings indicated that women starting IVF were only slightly different emotionally from the norm groups. Unsuccessful treatment raised the women's levels of negative emotions, which continued after consecutive unsuccessful cycles. In general, most women proved to adjust well to unsuccessful IVF, although a considerable group showed subclinical emotional problems. When IVF resulted in pregnancy, the negative emotions disappeared, indicating that treatment-induced stress is considerably related to threats of failure. The concurrent research reviewed, should now be underpinned by longitudinal studies to provide more information about women's long-term emotional adjustment to unsuccessful IVF and about indicators of risk factors for problematic emotional adjustment after unsuccessful treatment, to foster focused psychological support for women at risk.",
"title": ""
},
{
"docid": "b1fbaaf4684238e61bf9d3706558f9fa",
"text": "Recommender systems increasingly use contextual and demographical data as a basis for recommendations. Users, however, often feel uncomfortable providing such information. In a privacy-minded design of recommenders, users are free to decide for themselves what data they want to disclose about themselves. But this decision is often complex and burdensome, because the consequences of disclosing personal information are uncertain or even unknown. Although a number of researchers have tried to analyze and facilitate such information disclosure decisions, their research results are fragmented, and they often do not hold up well across studies. This article describes a unified approach to privacy decision research that describes the cognitive processes involved in users’ “privacy calculus” in terms of system-related perceptions and experiences that act as mediating factors to information disclosure. The approach is applied in an online experiment with 493 participants using a mock-up of a context-aware recommender system. Analyzing the results with a structural linear model, we demonstrate that personal privacy concerns and disclosure justification messages affect the perception of and experience with a system, which in turn drive information disclosure decisions. Overall, disclosure justification messages do not increase disclosure. Although they are perceived to be valuable, they decrease users’ trust and satisfaction. Another result is that manipulating the order of the requests increases the disclosure of items requested early but decreases the disclosure of items requested later.",
"title": ""
},
{
"docid": "a82480d7f57aae37a6a0faa39ec02634",
"text": "Exclusion is the determination by a latent print examiner that two friction ridge impressions did not originate from the same source. The concept and terminology of exclusion vary among agencies. Much of the literature on latent print examination focuses on individualization, and much less attention has been paid to exclusion. This experimental study assesses the associations between a variety of factors and exclusion determinations. Although erroneous exclusions are more likely to occur on some images and for some examiners, they were widely distributed among images and examiners. Measurable factors found to be associated with exclusion rates include the quality of the latent, value determinations, analysis minutia count, comparison difficulty, and the presence of cores or deltas. An understanding of these associations will help explain the circumstances under which errors are more likely to occur and when determinations are less likely to be reproduced by other examiners; the results should also lead to improved effectiveness and efficiency of training and casework quality assurance. This research is intended to assist examiners in improving the examination process and provide information to the broader community regarding the accuracy, reliability, and implications of exclusion decisions.",
"title": ""
},
{
"docid": "102a9eb7ba9f65a52c6983d74120430e",
"text": "A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain. This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). Social and Personality Psychology Compass 5/6 (2011): 359–371, 10.1111/j.1751-9004.2011.00355.x a 2011 The Authors Social and Personality Psychology Compass a 2011 Blackwell Publishing Ltd The current article examines the viability of the causal steps approach in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) are tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation. Traditionally, the X fi Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect. 
At first glance, the requirement of a significant X fi Y association prior to examining mediation seems reasonable. If there is no significant X fi Y relationship, how can there be any mediation of it? Furthermore, the requirement that X fi Y become nonsignificant when controlling for the mediator seems sensible in order to claim ‘full mediation’. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.",
"title": ""
},
{
"docid": "2a10978fdd01c7c19d957fb4224016bf",
"text": "To my parents and my girlfriend. Abstract Techniques of Artificial Intelligence and Human-Computer Interaction have empowered computer music systems with the ability to perform with humans via a wide spectrum of applications. However, musical interaction between humans and machines is still far less musical than the interaction between humans since most systems lack any representation or capability of musical expression. This thesis contributes various techniques, especially machine-learning algorithms, to create artificial musicians that perform expressively and collaboratively with humans. The current system focuses on three aspects of expression in human-computer collaborative performance: 1) expressive timing and dynamics, 2) basic improvisation techniques, and 3) facial and body gestures. Timing and dynamics are the two most fundamental aspects of musical expression and also the main focus of this thesis. We model the expression of different musicians as co-evolving time series. Based on this representation, we develop a set of algorithms, including a sophisticated spectral learning method, to discover regularities of expressive musical interaction from rehearsals. Given a learned model, an artificial performer generates its own musical expression by interacting with a human performer given a pre-defined score. The results show that, with a small number of rehearsals, we can successfully apply machine learning to generate more expressive and human-like collaborative performance than the baseline automatic accompaniment algorithm. This is the first application of spectral learning in the field of music. Besides expressive timing and dynamics, we consider some basic improvisation techniques where musicians have the freedom to interpret pitches and rhythms. We developed a model that trains a different set of parameters for each individual measure and focus on the prediction of the number of chords and the number of notes per chord. Given the model prediction, an improvised score is decoded using nearest-neighbor search, which selects the training example whose parameters are closest to the estimation. Our result shows that our model generates more musical, interactive, and natural collaborative improvisation than a reasonable baseline based on mean estimation. Although not conventionally considered to be \" music, \" body and facial movements are also important aspects of musical expression. We study body and facial expressions using a humanoid saxophonist robot. We contribute the first algorithm to enable a robot to perform an accompaniment for a musician and react to human performance with gestural and facial expression. The current system uses rule-based performance-motion mapping and separates robot motions into three groups: finger motions, …",
"title": ""
},
{
"docid": "b58c1e18a792974f57e9f676c1495826",
"text": "The influence of bilingualism on cognitive test performance in older adults has received limited attention in the neuropsychology literature. The aim of this study was to examine the impact of bilingualism on verbal fluency and repetition tests in older Hispanic bilinguals. Eighty-two right-handed participants (28 men and 54 women) with a mean age of 61.76 years (SD = 9.30; range = 50-84) and a mean educational level of 14.8 years (SD = 3.6; range 2-23) were selected. Forty-five of the participants were English monolinguals, 18 were Spanish monolinguals, and 19 were Spanish-English bilinguals. Verbal fluency was tested by electing a verbal description of a picture and by asking participants to generate words within phonemic and semantic categories. Repetition was tested using a sentence-repetition test. The bilinguals' test scores were compared to English monolinguals' and Spanish monolinguals' test scores. Results demonstrated equal performance of bilingual and monolingual participants in all tests except that of semantic verbal fluency. Bilinguals who learned English before age 12 performed significantly better on the English repetition test and produced a higher number of words in the description of a picture than the bilinguals who learned English after age 12. Variables such as task demands, language interference, linguistic mode, and level of bilingualism are addressed in the Discussion section.",
"title": ""
},
{
"docid": "2a3b10551913946d4550cf6ca3e1a135",
"text": "We estimate flight-level price elasticities using a database of online prices and seat map displays. In contrast to market-level and route-level elasticities reported in the literature, flight-level elasticities can forecast responses in demand due to day-to-day price fluctuations. Knowing how elasticities vary by flight and booking characteristics and in response to competitors’ pricing actions allows airlines to design better promotions. It also allows policy makers the ability to evaluate the impacts of proposed tax increases or time-of-day congestion pricing policies. Our elasticity results show how airlines can design optimal promotions by considering not only which departure dates should be targeted, but also which days of the week customers should be allowed to purchase. Additionally, we show how elasticities can be used by carriers to strategically match a subset of their competitors’ sale fares. Methodologically, we use an approach that corrects for price endogeneity; failure to do so results in biased estimates and incorrect pricing recommendations. Using an instrumental variable approach to address this problem we find a set of valid instruments that can be used in future studies of air travel demand. We conclude by describing how our approach contributes to the literature, by offering an approach to estimate flight-level demand elasticities that the research community needs as an input to more advanced optimization models that integrate demand forecasting, price optimization, and revenue optimization models. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7350ad0ff7c355cb7fcd1178ae4e1edd",
"text": "We report on the ongoing development of a research framework for dynamic integration of information from hard (electronic) and soft (human) sensors. We describe this framework, which includes representation of 2nd order uncertainty. We outline current and planned human-in-the-loop experiments in which an ldquoad hoc community of human observersrdquo provides input reports via mobile phones and PDAs. Our overall approach is based on three pillars: traditional sensing resources (ldquoS-spacerdquo), dynamic communities of human observers (ldquoH-spacerdquo) and resources such as archived sensor data, blogs, reports, dynamic news reports from citizen reporters via the Internet (ldquoI-spacerdquo). The sensors in all three of these pillars need to be characterized and calibrated. In H-space and I-space, calibration issues related to motivation, truthfulness, etc. must be considered in addition to the standard physical characterization and calibration issues that need to be considered in S-space.",
"title": ""
}
] | scidocsrr |
319177607e15c7ae915ac9c0c9221048 | Ontology extraction from MongoDB using formal concept analysis | [
{
"docid": "9afc0411331ac43bc54df639760813af",
"text": "Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.",
"title": ""
},
{
"docid": "6a2544c5c52b08e70e0d0e2696f41017",
"text": "This first textbook on formal concept analysis gives a systematic presentation of the mathematical foundations and their relations to applications in computer science, especially in Before we only the expression is required for basic law. In case fcbo can understand frege's system so. Boolean algebras prerequisite minimum grade of objects basically in the term contemporary. The following facts many are biconditionals such from the name of can solely. But one distinguishes indefinite and c2 then its underlying theory of the comprehension principle. The membership sign here is in those already generated for functions view. Comprehension principle of the point if and attributes let further. A set of height two sentences, are analyzed from the connections between full reconstruction. Frege later in the discussion so, that direction of logic laws governing cardinal number. This theorem by the open only, over concepts. Where is a relation names or, using this means of his attitude among. With our example we define xis an ancestor of mathematics.",
"title": ""
}
] | [
{
"docid": "0886c323b86b4fac8de6217583841318",
"text": "Data Mining is a technique used in various domains to give meaning to the available data Classification is a data mining (machine learning) technique used to predict group membership for data instances. In this paper, we present the basic classification techniques. Several major kinds of classification method including decision tree, Bayesian networks, k-nearest neighbour classifier, Neural Network, Support vector machine. The goal of this paper is to provide a review of different classification techniques in data mining. Keywords— Data mining, classification, Supper vector machine (SVM), K-nearest neighbour (KNN), Decision Tree.",
"title": ""
},
{
"docid": "50ebb851bb0fceeddd39fdee66941e6c",
"text": "Machine learning involves optimizing a loss function on unlabeled data points given examples of labeled data points, where the loss function measures the performance of a learning algorithm. We give an overview of techniques, called reductions, for converting a problem of minimizing one loss function into a problem of minimizing another, simpler loss function. This tutorial discusses how to create robust reductions that perform well in practice. The reductions discussed here can be used to solve any supervised learning problem with a standard binary classification or regression algorithm available in any machine learning toolkit. We also discuss common design flaws in folklore reductions.",
"title": ""
},
{
"docid": "49ca032d3d62eae113fdaa81538151d1",
"text": "Wikipedia articles contain, besides free text, various types of structured information in the form of wiki markup. The type of wiki content that is most valuable for search are Wikipedia infoboxes, which display an article’s most relevant facts as a table of attribute-value pairs on the top right-hand side of the Wikipedia page. Infobox data is not used by Wikipedia’s own search engine. Standard Web search engines like Google or Yahoo also do not take advantage of the data. In this paper, we present Faceted Wikipedia Search, an alternative search interface for Wikipedia, which facilitates infobox data in order to enable users to ask complex questions against Wikipedia knowledge. By allowing users to query Wikipedia like a structured database, Faceted Wikipedia Search helps them to truly exploit Wikipedia’s collective intelligence.",
"title": ""
},
{
"docid": "85c74646e74aaff7121042beaded5bfe",
"text": "We consider the sampling bias introduced in the study of online networks when collecting data through publicly available APIs (application programming interfaces). We assess differences between three samples of Twitter activity; the empirical context is given by political protests taking place in May 2012. We track online communication around these protests for the period of one month, and reconstruct the network of mentions and re-tweets according to the search and the streaming APIs, and to different filraph comparison tering parameters. We find that smaller samples do not offer an accurate picture of peripheral activity; we also find that the bias is greater for the network of mentions, partly because of the higher influence of snowballing in identifying relevant nodes. We discuss the implications of this bias for the study of diffusion dynamics and political communication through social media, and advocate the need for more uniform sampling procedures to study online communication. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d5f77e99ebda2f1419b8dbb56d93c41f",
"text": "We developed a four-arm four-crawler advanced disaster response robot called OCTOPUS. Disaster response robots are expected to be capable of both mobility, e.g., entering narrow spaces over very rough unstable ground, and workability, e.g., conducting complex debris-demolition work. However, conventional disaster response robots are specialized in either mobility or workability. Moreover, strategies to independently enhance the capability of crawlers for mobility and arms for workability will increase the robot size and weight. To balance environmental applicability with the mobility and workability, OCTOPUS is equipped with a mutual complementary strategy between its arms and crawlers. The four arms conduct complex tasks while ensuring stabilization when climbing steps. The four crawlers translate rough terrain while avoiding toppling over when conducting demolition work. OCTOPUS is hydraulic driven and teleoperated by two operators. To evaluate the performance of OCTOPUS, we conducted preliminary experiments involving climbing high steps and removing attached objects by using the four arms. The results showed that OCTOPUS completed the two tasks by adequately coordinating its four arms and four crawlers and improvement in operability needs.",
"title": ""
},
{
"docid": "a6a7f8902b0cce0d837cc94a85b97098",
"text": "Context. : The increasing popularity of JavaScript has lead to a variety of frameworks that aim to help developers to address programming tasks. However, the number of JavaScript Frameworks has risen rapidly to thousands and more. It is difficult for practitioners to identify the frameworks that best fit to their needs and to develop new frameworks that fit such needs. Existing research has focused in proposing software metrics for the frameworks, which do not carry a high value to practitioners. While benchmarks, technical reports, and experts’ opinions are available, they suffer the same issue that they do not carry much value. In particular, there is a lack of knowledge regarding the processes and reasons that drive developers towards the choice. ∗Corresponding author Email addresses: amantia.pano2@unibz.it (Amantia Pano), daniel.graziotin@informatik.uni-stuttgart.de (Daniel Graziotin), pekkaa@ntnu.edu (Pekka Abrahamsson) 1 ar X iv :1 60 5. 04 30 3v 1 [ cs .S E ] 1 3 M ay 2 01 6 Objective. : This paper explores the human aspects of software development behind the decision-making process that leads to a choice of a JavaScript Framework. Method. : We conducted a qualitative interpretive study, following the grounded theory data analysis methodology. We interviewed 18 participants who are decision makers in their companies or entrepreneurs, or are able to motivate the JavaScript Framework decision-making process. Results. : We offer a model of factors that are desirable to be found in a JavaScript Framework and a representation of the decision makers involved in the frameworks selection. The factors are usability (attractiveness, learnability, understandability), cost, efficiency (performance, size), and functionality (automatisation, extensibility, flexibility, isolation, modularity, suitability, updated). These factors are evaluated by a combination of four possible decision makers, which are customer, developer, team, and team leader. Conclusion. : Our model contributes to the body of knowledge related to the decision-making process when selecting a JavaScript framework. As a practical implication, we believe that our model is useful for (1) Web developers and (2) JavaScript framework developers.",
"title": ""
},
{
"docid": "c0a8134a2b815398689eaac7fb9de8e3",
"text": "A forward design process applicable to the specification of flight simulator cueing systems is presented in this paper. This process is based on the analysis of the pilotvehicle control loop by using a pilot model incorporating both visual and vestibular feedback, and the aircraft dynamics. After substituting the model for the simulated aircraft, the analysis tools are used to adjust the washout filter parameters with the goal of restoring pilot control behaviour. This process allows the specification of the motion cueing algorithm. Then, based on flight files representative for the operational flight envelope, the required motion system space is determined. The motion-base geometry is established based on practical limitations, as well as criteria for the stability of the platform with respect to singular conditions. With this process the characteristics of the aircraft, the tasks to be simulated, and the missions themselves are taken into account in defining the simulator motion cueing system.",
"title": ""
},
{
"docid": "cd545436dc62cc32f960a09442242eb2",
"text": "BACKGROUND\nSocial networking services (SNSs) contain abundant information about the feelings, thoughts, interests, and patterns of behavior of adolescents that can be obtained by analyzing SNS postings. An ontology that expresses the shared concepts and their relationships in a specific field could be used as a semantic framework for social media data analytics.\n\n\nOBJECTIVE\nThe aim of this study was to refine an adolescent depression ontology and terminology as a framework for analyzing social media data and to evaluate description logics between classes and the applicability of this ontology to sentiment analysis.\n\n\nMETHODS\nThe domain and scope of the ontology were defined using competency questions. The concepts constituting the ontology and terminology were collected from clinical practice guidelines, the literature, and social media postings on adolescent depression. Class concepts, their hierarchy, and the relationships among class concepts were defined. An internal structure of the ontology was designed using the entity-attribute-value (EAV) triplet data model, and superclasses of the ontology were aligned with the upper ontology. Description logics between classes were evaluated by mapping concepts extracted from the answers to frequently asked questions (FAQs) onto the ontology concepts derived from description logic queries. The applicability of the ontology was validated by examining the representability of 1358 sentiment phrases using the ontology EAV model and conducting sentiment analyses of social media data using ontology class concepts.\n\n\nRESULTS\nWe developed an adolescent depression ontology that comprised 443 classes and 60 relationships among the classes; the terminology comprised 1682 synonyms of the 443 classes. In the description logics test, no error in relationships between classes was found, and about 89% (55/62) of the concepts cited in the answers to FAQs mapped onto the ontology class. Regarding applicability, the EAV triplet models of the ontology class represented about 91.4% of the sentiment phrases included in the sentiment dictionary. In the sentiment analyses, \"academic stresses\" and \"suicide\" contributed negatively to the sentiment of adolescent depression.\n\n\nCONCLUSIONS\nThe ontology and terminology developed in this study provide a semantic foundation for analyzing social media data on adolescent depression. To be useful in social media data analysis, the ontology, especially the terminology, needs to be updated constantly to reflect rapidly changing terms used by adolescents in social media postings. In addition, more attributes and value sets reflecting depression-related sentiments should be added to the ontology.",
"title": ""
},
{
"docid": "0aa666e59fc645f8bbc16483581bf4c4",
"text": "Wide Area Motion Imagery (WAMI) enables the surveillance of tens of square kilometers with one airborne sensor Each image can contain thousands of moving objects. Applications such as driver behavior analysis or traffic monitoring require precise multiple object tracking that is dependent on initial detections. However, low object resolution, dense traffic, and imprecise image alignment lead to split, merged, and missing detections. No systematic evaluation of moving object detection exists so far although many approaches have been presented in the literature. This paper provides a detailed overview of existing methods for moving object detection in WAMI data. Also we propose a novel combination of short-term background subtraction and suppression of image alignment errors by pixel neighborhood consideration. In total, eleven methods are systematically evaluated using more than 160,000 ground truth detections of the WPAFB 2009 dataset. Best performance with respect to precision and recall is achieved by the proposed one.",
"title": ""
},
{
"docid": "b11331341448f108fb1b503ab8ecd7b8",
"text": "Repairing defects of the auricle requires an appreciation of the underlying 3-dimensional framework, the flexible properties of the cartilages, and the healing contractile tendencies of the surrounding soft tissue. In the analysis of auricular defects and planning of their reconstruction, it is helpful to divide the auricle into subunits for which different techniques may offer better functional and aesthetic outcomes. This article reviews many of the reconstructive options for defects of the various auricular subunits.",
"title": ""
},
{
"docid": "a39ce01ebbfc3fa1e90c7eed2aa1b2ef",
"text": "Most scientific articles are available in PDF format. The PDF standard allows the generation of metadata that is included within the document. However, many authors do not define this information, making this feature unreliable or incomplete. This fact has been motivating research which aims to extract metadata automatically. Automatic metadata extraction has been identified as one of the most challenging tasks in document engineering. This work proposes Artic, a method for metadata extraction from scientific papers which employs a two-layer probabilistic framework based on Conditional Random Fields. The first layer aims at identifying the main sections with metadata information, and the second layer finds, for each section, the corresponding metadata. Given a PDF file containing a scientific paper, Artic extracts the title, author names, emails, affiliations, and venue information. We report on experiments using 100 real papers from a variety of publishers. Our results outperformed the state-of-the-art system used as the baseline, achieving a precision of over 99%.",
"title": ""
},
{
"docid": "a3185ee0a3c4ad9a15b52233f46b5e1a",
"text": "Automatic fusion of aerial optical imagery and untextured LiDAR data has been of significant interest for generating photo-realistic 3D urban models in recent years. However, unsupervised, robust registration still remains a challenge. This paper presents a new registration method that does not require priori knowledge such as GPS/INS information. The proposed algorithm is based on feature correspondence between a LiDAR depth map and a depth map from an optical image. Each optical depth map is generated from edge-preserving dense correspondence between the image and another optical image, followed by ground plane estimation and alignment for depth consistency. Our two-pass RANSAC with Maximum Likelihood estimation incorporates 2D-2D and 2D-3D correspondences to yield robust camera pose estimation. Experiments with a LiDAR-optical imagery dataset show promising results, without using initial pose information.",
"title": ""
},
{
"docid": "c25d877f23f874a5ced7548998ec8157",
"text": "The paper presents a Neural Network model for modeling academic profile of students. The proposed model allows prediction of students’ academic performance based on some of their qualitative observations. Classifying and predicting students’ academic performance using arithmetical and statistical techniques may not necessarily offer the best way to evaluate human acquisition of knowledge and skills, but a hybridized fuzzy neural network model successfully handles reasoning with imprecise information, and enables representation of student modeling in the linguistic form the same way the human teachers do. The model is designed, developed and tested in MATLAB and JAVA which considers factors like age, gender, education, past performance, work status, study environment etc. for performance prediction of students. A Fuzzy Probabilistic Neural Network model has been proposed which enables the design of an easy-to-use, personalized student performance prediction component. The results of experiments show that the model outperforms traditional back-propagation neural networks as well as statistical models. It is also found to be a useful tool in predicting the performance of students belonging to any stream. The model may provide dual advantage to the educational institutions; first by helping teachers to amend their teaching methodology based on the level of students thereby improving students’ performances and secondly classifying the likely successful and unsuccessful students.",
"title": ""
},
{
"docid": "a861082476893281800441c46e71d652",
"text": "Current debates on design research, and its relation to other research fields and scientific disciplines, refer back to a fundamental distinction introduced by Herb Simon (Simon, 1996 (1981)): Design and design research do not primarily focus on explaining the world as it is; they share with engineering a fundamental interest in focusing on the world as it could be. In parallel, we observe a growing interest in the science studies to interpret scientific research as a constructive and creative practice (Knorr Cetina, 1999; 2002), organized as experimental systems (Rheinberger, 2001). Design fiction is a new approach, which integrates these two perspectives, in order to develop a method toolbox for design research for a complex world (Bleecker, 2009; Wiedmer & Caviezel, 2009; Grand 2010).",
"title": ""
},
{
"docid": "404acd9265ae921e7454d4348ae45bda",
"text": "Wepresent a bitmap printingmethod and digital workflow usingmulti-material high resolution Additive Manufacturing (AM). Material composition is defined based on voxel resolution and used to fabricate a design object with locally varying material stiffness, aiming to satisfy the design objective. In this workflowvoxel resolution is set by theprinter’s native resolution, eliminating theneed for slicing andpath planning. Controlling geometry and material property variation at the resolution of the printer provides significantly greater control over structure–property–function relationships. To demonstrate the utility of the bitmap printing approach we apply it to the design of a customized prosthetic socket. Pressuresensing elements are concurrently fabricated with the socket, providing possibilities for evaluation of the socket’s fit. The level of control demonstrated in this study cannot be achieved using traditional CAD tools and volume-based AM workflows, implying that new CAD workflows must be developed in order to enable designers to harvest the capabilities of AM. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7f20eba09cddb9d980b6475aa089463f",
"text": "This technical note describes a new baseline for the Natural Questions (Kwiatkowski et al., 2019). Our model is based on BERT (Devlin et al., 2018) and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks respectively. This baseline has been submitted to the official NQ leaderboard†. Code, preprocessed data and pretrained model are available‡.",
"title": ""
},
{
"docid": "9ff59fcccfdfb9c4c6c34322a6bffb9e",
"text": "A growing number of affective computing researches recently developed a computer system that can recognize an emotional state of the human user to establish affective human-computer interactions. Various measures have been used to estimate emotional states, including self-report, startle response, behavioral response, autonomic measurement, and neurophysiologic measurement. Among them, inferring emotional states from electroencephalography (EEG) has received considerable attention as EEG could directly reflect emotional states with relatively low costs and simplicity. Yet, EEG-based emotional state estimation requires well-designed computational methods to extract information from complex and noisy multichannel EEG data. In this paper, we review the computational methods that have been developed to deduct EEG indices of emotion, to extract emotion-related features, or to classify EEG signals into one of many emotional states. We also propose using sequential Bayesian inference to estimate the continuous emotional state in real time. We present current challenges for building an EEG-based emotion recognition system and suggest some future directions.",
"title": ""
},
{
"docid": "b475ddb8c3ff32dfea5f51d054680bc3",
"text": "An increasing price and demand for natural gas has made it possible to explore remote gas fields. Traditional offshore production platforms for natural gas have been exporting the partially processed natural gas to shore, where it is further processed to permit consumption by end-users. Such an approach is possible where the gas field is located within a reasonable distance from shore or from an existing gas pipeline network. However, much of the world’s gas reserves are found in remote offshore fields where transport via a pipeline is not feasible or is uneconomic to install and therefore, to date, has not been possible to explore. The development of floating production platforms and, on the receiving end, regasification platforms, have increased the possibilities to explore these fields and transport the liquefied gas in a more efficient form, i.e. liquefied natural gas (LNG), to the end user who in turn can readily import the gas. Floating production platforms and regasification platforms, collectively referred to as FLNG, imply a blend of technology from land-based LNG industry, offshore oil and gas industry and marine transport technology. Regulations and rules based on experience from these applications could become too conservative or not conservative enough when applied to a FLNG unit. Alignment with rules for conventional LNG carriers would be an advantage since this would increase the transparency and possibility for standardization in the building of floating LNG production vessels. The objective of this study is to identify the risks relevant to FLNG. The risks are compared to conventional LNG carriers and whether or not regulatory alignment possibilities exist. To identify the risks, a risk analysis was performed based on the principles of formal safety assessment methodology. To propose regulatory alignment possibilities, the risks found were also evaluated against the existing rules and regulations of Det Norske Veritas. The conclusion of the study is that the largest risk-contributing factor on an FLNG is the presence of processing, liquefaction or regasification equipment and for an LNG carrier it is collision, grounding and contact accidents. Experience from oil FPSOs could be used in the design of LNG FPSOs, and attention needs to be drawn to the additional requirements due to processing and storage of cryogenic liquid on board. FSRUs may follow either an approach for offshore rules or, if intended to follow a regular docking scheme, follow an approach for ship rules with additional issues addressed in classification notes.",
"title": ""
},
{
"docid": "f337ae68a40c0ae7f6ff76ba11877adf",
"text": "Our fundamental scientific task is to convert observations of particular persons behaving in particular ways in particular situations into assertions that certain kinds of persons will behave in certain kinds of ways in certain kinds of situations, that is, to construct triple typologies or equivalence classes-of persons, of behaviors, and of situations-and to fashion theories of personality that relate these equivalence classes to one another. It is argued that the different approaches to the study of personality are distinguished from one another not by whether they are idiographic or nomothetic but by the strategies they employ for constructing-or ignoring-each of these three types of equivalence classes. The likely attributes of a successful interactional theory of personality-one that would embrace the entire triple typology-are proposed and discussed.",
"title": ""
},
{
"docid": "f77495366909b9713463bebf2b4ff2fc",
"text": "This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 % for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.",
"title": ""
}
] | scidocsrr |
12ad2563791538b48623e362b2392f05 | Game-theoretic Analysis of Computation Offloading for Cloudlet-based Mobile Cloud Computing | [
{
"docid": "0cbd3587fe466a13847e94e29bb11524",
"text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?",
"title": ""
},
{
"docid": "956799f28356850fda78a223a55169bf",
"text": "Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for",
"title": ""
},
{
"docid": "aa18c10c90af93f38c8fca4eff2aab09",
"text": "The unabated flurry of research activities to augment various mobile devices by leveraging heterogeneous cloud resources has created a new research domain called Mobile Cloud Computing (MCC). In the core of such a non-uniform environment, facilitating interoperability, portability, and integration among heterogeneous platforms is nontrivial. Building such facilitators in MCC requires investigations to understand heterogeneity and its challenges over the roots. Although there are many research studies in mobile computing and cloud computing, convergence of these two areas grants further academic efforts towards flourishing MCC. In this paper, we define MCC, explain its major challenges, discuss heterogeneity in convergent computing (i.e. mobile computing and cloud computing) and networking (wired and wireless networks), and divide it into two dimensions, namely vertical and horizontal. Heterogeneity roots are analyzed and taxonomized as hardware, platform, feature, API, and network. Multidimensional heterogeneity in MCC results in application and code fragmentation problems that impede development of cross-platform mobile applications which is mathematically described. The impacts of heterogeneity in MCC are investigated, related opportunities and challenges are identified, and predominant heterogeneity handling approaches like virtualization, middleware, and service oriented architecture (SOA) are discussed. We outline open issues that help in identifying new research directions in MCC.",
"title": ""
}
] | [
{
"docid": "6c14243c49a2d119d768685b59f9548b",
"text": "Over the past decade, researchers have shown significant advances in the area of radio frequency identification (RFID) and metamaterials. RFID is being applied to a wide spectrum of industries and metamaterial-based antennas are beginning to perform just as well as existing larger printed antennas. This paper presents two novel metamaterial-based antennas for passive ultra-high frequency (UHF) RFID tags. It is shown that by implementing omega-like elements and split-ring resonators into the design of an antenna for an UHF RFID tag, the overall size of the antenna can be significantly reduced to dimensions of less than 0.15λ0, while preserving the performance of the antenna.",
"title": ""
},
{
"docid": "03280447faf00c523b099d4bdbbfe7a5",
"text": "Ostrzenski’s G-pot anatomical structure discovery has been verified by the anatomy, histology, MRI in vivo, and electrovaginography in vivo studies. The objectives of this scientific-clinical investigation were to develop a new surgical reconstructive intervention (G-spotplasty); to determine the ability of G-spotplasty surgical implementation; to observe for potential complications; and to gather initial information on whether G-spotplasty improves female sexual activity, sexual behaviors, and sexual concerns. A case series study was designed and conducted with 5-year follow-up (October 2013 and October 2017). The rehearsal of new G-spotplasty was performed on fresh female cadavers. Three consecutive live women constituted this clinical study population, and they were subjected to the newly developed G-spotplasty procedure in October 2013. Preoperatively and postoperatively, a validated, self-completion instrument of Sexual Relationships and Activities Questionnaire (SRA-Q) was used to measure female sexual activity, sexual behaviors, and sexual concerns. Three out of twelve women met inclusion criteria and were incorporated into this study. All patients were subjected to G-spotplasty, completed 5-year follow-up, and returned completed SRA-Q in a sealed envelope. New G-spotplasty was successfully implemented without surgical difficulty and without complications. All patients reported re-establishing vaginal orgasms with different degrees of difficulties, observing return of anterior vaginal wall engorgement, and were very pleased with the outcome of G-spotplasty. The G-spotplasty is a simple surgical intervention, easy to implement, and improves sexual activities, sexual behaviors, and sexual concerns. The preliminary results are very promising and paved the way for additional clinical-scientific research. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "349a5c840daa587aa5d42c6e584e2103",
"text": "We propose a class of functional dependencies for graphs, referred to as GFDs. GFDs capture both attribute-value dependencies and topological structures of entities, and subsume conditional functional dependencies (CFDs) as a special case. We show that the satisfiability and implication problems for GFDs are coNP-complete and NP-complete, respectively, no worse than their CFD counterparts. We also show that the validation problem for GFDs is coNP-complete. Despite the intractability, we develop parallel scalable algorithms for catching violations of GFDs in large-scale graphs. Using real-life and synthetic data, we experimentally verify that GFDs provide an effective approach to detecting inconsistencies in knowledge and social graphs.",
"title": ""
},
{
"docid": "15f2f4ba8635366e5f2879d085511f46",
"text": "Vessel segmentation is a key step for various medical applications, it is widely used in monitoring the disease progression, and evaluation of various ophthalmologic diseases. However, manual vessel segmentation by trained specialists is a repetitive and time-consuming task. In the last two decades, many approaches have been introduced to segment the retinal vessels automatically. With the more recent advances in the field of neural networks and deep learning, multiple methods have been implemented with focus on the segmentation and delineation of the blood vessels. Deep Learning methods, such as the Convolutional Neural Networks (CNN), have recently become one of the new trends in the Computer Vision area. Their ability to find strong spatially local correlations in the data at different abstraction levels allows them to learn a set of filters that are useful to correctly segment the data, when given a labeled training set. In this dissertation, different approaches based on deep learning techniques for the segmentation of retinal blood vessels are studied. Furthermore, in this dissertation are also studied and evaluated the different techniques that have been used for vessel segmentation, based on machine learning (Random Forests and Support vector machine algorithms), and how these can be combined with the deep learning approaches.",
"title": ""
},
{
"docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06",
"text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.",
"title": ""
},
{
"docid": "05610fd0e6373291bdb4bc28cf1c691b",
"text": "In this work, we acknowledge the need for software engineers to devise specialized tools and techniques for blockchain-oriented software development. Ensuring effective testing activities, enhancing collaboration in large teams, and facilitating the development of smart contracts all appear as key factors in the future of blockchain-oriented software development.",
"title": ""
},
{
"docid": "26cc29177040461634929eb1fa13395d",
"text": "In this paper, we first characterize distributed real-time systems by the following two properties that have to be supported: best eflorl and leas2 suffering. Then, we propose a distributed real-time object model DRO which complies these properties. Based on the DRO model, we design an object oriented programming language DROL: an extension of C++ with the capa.bility of describing distributed real-time systems. The most eminent feature of DROL is that users can describe on meta level the semantics of message communications as a communication protocol with sending and receiving primitives. With this feature, we can construct a flexible distributed real-time system satisfying specifications which include timing constraints. We implement a runtime system of DROL on the ARTS kernel, and evaluate the efficiency of the prototype implementation as well as confirm the high expressive power of the language.",
"title": ""
},
{
"docid": "f76b587a1bc282a98cf8e42bdd6f5032",
"text": "Ensemble-based methods are among the most widely used techniques for data stream classification. Their popularity is attributable to their good performance in comparison to strong single learners while being relatively easy to deploy in real-world applications. Ensemble algorithms are especially useful for data stream learning as they can be integrated with drift detection algorithms and incorporate dynamic updates, such as selective removal or addition of classifiers. This work proposes a taxonomy for data stream ensemble learning as derived from reviewing over 60 algorithms. Important aspects such as combination, diversity, and dynamic updates, are thoroughly discussed. Additional contributions include a listing of popular open-source tools and a discussion about current data stream research challenges and how they relate to ensemble learning (big data streams, concept evolution, feature drifts, temporal dependencies, and others).",
"title": ""
},
{
"docid": "5473962c6c270df695b965cbcc567369",
"text": "Medical professionals need a reliable prediction methodology to diagnose cancer and distinguish between the different stages in cancer. Classification is a data mining function that assigns items in a collection to target groups or classes. C4.5 classification algorithm has been applied to SEER breast cancer dataset to classify patients into either “Carcinoma in situ” (beginning or pre-cancer stage) or “Malignant potential” group. Pre-processing techniques have been applied to prepare the raw dataset and identify the relevant attributes for classification. Random test samples have been selected from the pre-processed data to obtain classification rules. The rule set obtained was tested with the remaining data. The results are presented and discussed. Keywords— Breast Cancer Diagnosis, Classification, Clinical Data, SEER Dataset, C4.5 Algorithm",
"title": ""
},
{
"docid": "0250d6bb0bcf11ca8af6c2661c1f7f57",
"text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.",
"title": ""
},
{
"docid": "0321ef8aeb0458770cd2efc35615e11c",
"text": "Entity-relationship-structured data is becoming more important on the Web. For example, large knowledge bases have been automatically constructed by information extraction from Wikipedia and other Web sources. Entities and relationships can be represented by subject-property-object triples in the RDF model, and can then be precisely searched by structured query languages like SPARQL. Because of their Boolean-match semantics, such queries often return too few or even no results. To improve recall, it is thus desirable to support users by automatically relaxing or reformulating queries in such a way that the intention of the original user query is preserved while returning a sufficient number of ranked results. In this paper we describe comprehensive methods to relax SPARQL-like triplepattern queries in a fully automated manner. Our framework produces a set of relaxations by means of statistical language models for structured RDF data and queries. The query processing algorithms merge the results of different relaxations into a unified result list, with ranking based on any ranking function for structured queries over RDF-data. Our experimental evaluation, with two different datasets about movies and books, shows the effectiveness of the automatically generated relaxations and the improved quality of query results based on assessments collected on the Amazon Mechanical Turk platform.",
"title": ""
},
{
"docid": "e576b8677816ec54c7dcf52e633e6c9f",
"text": "OBJECTIVE\nThe objective of this study was to determine the level of knowledge, comfort, and training related to the medical management of child abuse among pediatrics, emergency medicine, and family medicine residents.\n\n\nMETHODS\nSurveys were administered to program directors and third-year residents at 67 residency programs. The resident survey included a 24-item quiz to assess knowledge regarding the medical management of physical and sexual child abuse. Sites were solicited from members of a network of child abuse physicians practicing at institutions with residency programs.\n\n\nRESULTS\nAnalyzable surveys were received from 53 program directors and 462 residents. Compared with emergency medicine and family medicine programs, pediatric programs were significantly larger and more likely to have a medical provider specializing in child abuse pediatrics, have faculty primarily responsible for child abuse training, use a written curriculum for child abuse training, and offer an elective rotation in child abuse. Exposure to child abuse training and abused patients was highest for pediatric residents and lowest for family medicine residents. Comfort with managing child abuse cases was lowest among family medicine residents. On the knowledge quiz, pediatric residents significantly outperformed emergency medicine and family medicine residents. Residents with high knowledge scores were significantly more likely to come from larger programs and programs that had a center, provider, or interdisciplinary team that specialized in child abuse pediatrics; had a physician on faculty responsible for child abuse training; used a written curriculum for child abuse training; and had a required rotation in child abuse pediatrics.\n\n\nCONCLUSIONS\nBy analyzing the relationship between program characteristics and residents' child abuse knowledge, we found that pediatric programs provide far more training and resources for child abuse education than emergency medicine and family medicine programs. As leaders, pediatricians must establish the importance of this topic in the pediatric education of residents of all specialties.",
"title": ""
},
{
"docid": "7ccd75f1626966b4ffb22f2788d64fdc",
"text": "Diabetes has affected over 246 million people worldwide with a majority of them being women. According to the WHO report, by 2025 this number is expected to rise to over 380 million. The disease has been named the fifth deadliest disease in the United States with no imminent cure in sight. With the rise of information technology and its continued advent into the medical and healthcare sector, the cases of diabetes as well as their symptoms are well documented. This paper aims at finding solutions to diagnose the disease by analyzing the patterns found in the data through classification analysis by employing Decision Tree and Naïve Bayes algorithms. The research hopes to propose a quicker and more efficient technique of diagnosing the disease, leading to timely treatment of the patients.",
"title": ""
},
{
"docid": "104fa95b500df05a052a230e80797f59",
"text": "Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning to each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster and to a better approximation than the best settings of hand-tuned rates.",
"title": ""
},
{
"docid": "fdc4d23fa336ca122fdfb12818901180",
"text": "Concept of communication systems, which use smart antennas is based on digital signal processing algorithms. Thus, the smart antennas system becomes capable to locate and track signals by the both: users and interferers and dynamically adapts the antenna pattern to enhance the reception in Signal-Of-Interest direction and minimizing interference in Signal-Of-Not-Interest direction. Hence, Space Division Multiple Access system, which uses smart antennas, is being used more often in wireless communications, because it shows improvement in channel capacity and co-channel interference. However, performance of smart antenna system greatly depends on efficiency of digital signal processing algorithms. The algorithm uses the Direction of Arrival (DOA) algorithms to estimate the number of incidents plane waves on the antenna array and their angle of incidence. This paper investigates performance of the DOA algorithms like MUSIC, ESPRIT and ROOT MUSIC on the uniform linear array in the presence of white noise. The simulation results show that MUSIC algorithm is the best. The resolution of the DOA techniques improves as number of snapshots, number of array elements and signalto-noise ratio increases.",
"title": ""
},
{
"docid": "a361214a42392cbd0ba3e0775d32c839",
"text": "We propose a design methodology to exploit adaptive nanodevices (memristors), virtually immune to their variability. Memristors are used as synapses in a spiking neural network performing unsupervised learning. The memristors learn through an adaptation of spike timing dependent plasticity. Neurons' threshold is adjusted following a homeostasis-type rule. System level simulations on a textbook case show that performance can compare with traditional supervised networks of similar complexity. They also show the system can retain functionality with extreme variations of various memristors' parameters, thanks to the robustness of the scheme, its unsupervised nature, and the power of homeostasis. Additionally the network can adjust to stimuli presented with different coding schemes.",
"title": ""
},
{
"docid": "71b5708fb9d078b370689cac22a66013",
"text": "This paper presents a model, synthesized from the literature, of factors that explain how business analytics contributes to business value. It also reports results from a preliminary test of that model. The model consists of two parts: a process and a variance model. The process model depicts the analyze-insight-decision-action process through which use of an organization’s business-analytic capabilities create business value. The variance model proposes that the five factors in Davenport et al.’s (2010) DELTA model of BA success factors, six from Watson and Wixom (2007), and three from Seddon et al.’s (2010) model of organizational benefits from enterprise systems, assist a firm to gain business value from business analytics. A preliminary test of the model was conducted using data from 100 customer-success stories from vendors such as IBM, SAP, and Teradata. Our conclusion is that the model is likely to be a useful basis for future research.",
"title": ""
},
{
"docid": "7cfc2866218223ba6bd56eb1f10ce29f",
"text": "This paper deals with prediction of anopheles number, the main vector of malaria risk, using environmental and climate variables. The variables selection is based on an automatic machine learning method using regression trees, and random forests combined with stratified two levels cross validation. The minimum threshold of variables importance is accessed using the quadratic distance of variables importance while the optimal subset of selected variables is used to perform predictions. Finally the results revealed to be qualitatively better, at the selection, the prediction, and the CPU time point of view than those obtained by GLM-Lasso method.",
"title": ""
},
{
"docid": "577841609abb10a978ed54429f057def",
"text": "Smart environments integrates various types of technologies, including cloud computing, fog computing, and the IoT paradigm. In such environments, it is essential to organize and manage efficiently the broad and complex set of heterogeneous resources. For this reason, resources classification and categorization becomes a vital issue in the control system. In this paper we make an exhaustive literature survey about the various computing systems and architectures which defines any type of ontology in the context of smart environments, considering both, authors that explicitly propose resources categorization and authors that implicitly propose some resources classification as part of their system architecture. As part of this research survey, we have built a table that summarizes all research works considered, and which provides a compact and graphical snapshot of the current classification trends. The goal and primary motivation of this literature survey has been to understand the current state of the art and identify the gaps between the different computing paradigms involved in smart environment scenarios. As a result, we have found that it is essential to consider together several computing paradigms and technologies, and that there is not, yet, any research work that integrates a merged resources classification, taxonomy or ontology required in such heterogeneous scenarios.",
"title": ""
},
{
"docid": "6a240e0f0944117cf17f4ec1e613d94a",
"text": "This paper presents a simple method for “do as I do\" motion transfer: given a source video of a person dancing we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject’s appearance. We adapt this setup for temporally coherent video generation including realistic face synthesis. Our video demo can be found at https://youtu.be/PCBTZh41Ris.",
"title": ""
}
] | scidocsrr |
1a59614b35106dc234cd7658289fcec5 | Anchor Free Network for Multi-Scale Face Detection | [
{
"docid": "07ff0274408e9ba5d6cd2b1a2cb7cbf8",
"text": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).",
"title": ""
},
{
"docid": "c7c5a4d0aacd62aa4e46a6426c8813c6",
"text": "Recent CNN based object detectors, no matter one-stage methods like YOLO [1,2], SSD [3], and RetinaNet [4] or two-stage detectors like Faster R-CNN [5], R-FCN [6] and FPN [7] are usually trying to directly finetune from ImageNet pre-trained models designed for image classification. There has been little work discussing on the backbone feature extractor specifically designed for the object detection. More importantly, there are several differences between the tasks of image classification and object detection. (i) Recent object detectors like FPN and RetinaNet usually involve extra stages against the task of image classification to handle the objects with various scales. (ii) Object detection not only needs to recognize the category of the object instances but also spatially locate the position. Large downsampling factor brings large valid receptive field, which is good for image classification but compromises the object location ability. Due to the gap between the image classification and object detection, we propose DetNet in this paper, which is a novel backbone network specifically designed for object detection. Moreover, DetNet includes the extra stages against traditional backbone network for image classification, while maintains high spatial resolution in deeper layers. Without any bells and whistles, state-of-the-art results have been obtained for both object detection and instance segmentation on the MSCOCO benchmark based on our DetNet (4.8G FLOPs) backbone. The code will be released for the reproduction.",
"title": ""
}
] | [
{
"docid": "30e9afa44756fa1b050945e9f3e1863e",
"text": "A 8-year-old Chinese boy with generalized pustular psoriasis (GPP) refractory to cyclosporine and methylprednisolone was treated successfully with two infusions of infliximab 3.3 mg/kg. He remained in remission for 21 months. Direct sequencing of IL36RN gene showed a homozygous mutation, c.115 + 6T>C. Juvenile GPP is a rare severe form of psoriasis occasionally associated with life-threatening complications. Like acitretin, cyclosporine and methotrexate, infliximab has been reported to be effective for juvenile GPP in case reports. However, there is a lack of data in the optimal treatment course of infliximab for juvenile GPP. Prolonged administration of these medications may cause toxic or fatal complications. We suggest that short-term infliximab regimen should be recommended as a choice for acute juvenile GPP refractory to traditional systemic therapies. WBC count and CRP are sensitive parameters to reflect the disease activity and evaluate the effectiveness of treatment. Monitoring CD4 T lymphocyte count, preventing and correcting CD4 lymphocytopenia are important in the treatment course of juvenile GPP.",
"title": ""
},
{
"docid": "fe8f4f987a28d3e7bff01db3263a740b",
"text": "BACKGROUND\nPeople whose chronic pain limits their independence are especially likely to become anxious and depressed. Mindfulness training has shown promise for stress-related disorders.\n\n\nMETHODS\nChronic pain patients who complained of anxiety and depression and who scored higher than moderate in Hamilton Depression Rating Scale (HDRS) and Hospital Anxiety and Depression Scale (HADS) as well as moderate in Quality of Life Scale (QOLS) were observed for eight weeks, three days a week for an hour of Mindfulness Meditation training with an hour daily home Mindfulness Meditation practice. Pain was evaluated on study entry and completion, and patients were given the Patients' Global Impression of Change (PGIC) to score at the end of the training program.\n\n\nRESULTS\nForty-seven patients (47) completed the Mindfulness Meditation Training program. Over the year-long observation, patients demonstrated noticeable improvement in depression, anxiety, pain, and global impression of change.\n\n\nCONCLUSION\nChronic pain patients who suffer with anxiety and depression may benefit from incorporating Mindfulness Meditation into their treatment plans.",
"title": ""
},
{
"docid": "889b4dabf8d9e9dbc6e3ae9e6dd9759f",
"text": "Neuroscience is undergoing faster changes than ever before. Over 100 years our field qualitatively described and invasively manipulated single or few organisms to gain anatomical, physiological, and pharmacological insights. In the last 10 years neuroscience spawned quantitative datasets of unprecedented breadth (e.g., microanatomy, synaptic connections, and optogenetic brain-behavior assays) and size (e.g., cognition, brain imaging, and genetics). While growing data availability and information granularity have been amply discussed, we direct attention to a less explored question: How will the unprecedented data richness shape data analysis practices? Statistical reasoning is becoming more important to distill neurobiological knowledge from healthy and pathological brain measurements. We argue that large-scale data analysis will use more statistical models that are non-parametric, generative, and mixing frequentist and Bayesian aspects, while supplementing classical hypothesis testing with out-of-sample predictions.",
"title": ""
},
{
"docid": "33b8417f25b56e5ea9944f9f33fc162c",
"text": "Researchers have attempted to model information diffusion and topic trends and lifecycle on online social networks. They have investigated the role of content, social connections and communities, familiarity and behavioral similarity in this context. The current article presents a survey of representative models that perform topic analysis, capture information diffusion, and explore the properties of social connections in the context of online social networks. The article concludes with a set of outlines of open problems and possible directions of future research interest. This article is intended for researchers to identify the current literature, and explore possibilities to improve the art.",
"title": ""
},
{
"docid": "fbfd3294cfe070ac432bf087fc382b18",
"text": "The alignment of business and information technology (IT) strategies is an important and enduring theoretical challenge for the information systems discipline, remaining a top issue in practice over the past 20 years. Multi-business organizations (MBOs) present a particular alignment challenge because business strategies are developed at the corporate level, within individual strategic business units and across the corporate investment cycle. In contrast, the extant literature implicitly assumes that IT strategy is aligned with a single business strategy at a single point in time. This paper draws on resource-based theory and path dependence to model functional, structural, and temporal IT strategic alignment in MBOs. Drawing on Makadok’s theory of profit, we show how each form of alignment creates value through the three strategic drivers of competence, governance, and flexibility, respectively. We illustrate the model with examples from a case study on the Commonwealth Bank of Australia. We also explore the model’s implications for existing IT alignment models, providing alternative theoretical explanations for how IT alignment creates value. Journal of Information Technology (2015) 30, 101–118. doi:10.1057/jit.2015.1; published online 24 March 2015",
"title": ""
},
{
"docid": "03670d2f33c8e6ca2351a21cb003c7a9",
"text": "BACKGROUND\nGigantomastia is a rare and dangerous condition in pregnancy. Although improvement after delivery is likely, postpartum aggravation is possible. To date, various pharmacological approaches have been tried, with only marginal effectiveness. Surgical intervention is often necessary.\n\n\nCASE\nA young woman presented at 32 weeks' gestation with mirror syndrome and gigantomastia. Two years earlier she had had reduction mammoplasty by free nipple transplant. She delivered by cesarean. Rapid postpartum progression of gigantomastia led to breast necrosis and sepsis. The clinical course was complicated by acute respiratory distress syndrome and renal failure. Emergent bilateral simple mastectomy was performed, with subsequent clinical improvement.\n\n\nCONCLUSION\nWhen this devastating condition occurs in pregnancy or postpartum, urgent surgical intervention may prevent potentially fatal complications.",
"title": ""
},
{
"docid": "f3b0bace6028b3d607618e2e53294704",
"text": "State-of-the art spoken language understanding models that automatically capture user intents in human to machine dialogs are trained with manually annotated data, which is cumbersome and time-consuming to prepare. For bootstrapping the learning algorithm that detects relations in natural language queries to a conversational system, one can rely on publicly available knowledge graphs, such as Freebase, and mine corresponding data from the web. In this paper, we present an unsupervised approach to discover new user intents using a novel Bayesian hierarchical graphical model. Our model employs search query click logs to enrich the information extracted from bootstrapped models. We use the clicked URLs as implicit supervision and extend the knowledge graph based on the relational information discovered from this model. The posteriors from the graphical model relate the newly discovered intents with the search queries. These queries are then used as additional training examples to complement the bootstrapped relation detection models. The experimental results demonstrate the effectiveness of this approach, showing extended coverage to new intents without impacting the known intents.",
"title": ""
},
{
"docid": "4d147b58340571f4254f7c2190b383b9",
"text": "We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics information computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on ILSVRC-12 benchmark. ThiNet achieves 3.31 x FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.",
"title": ""
},
{
"docid": "6f6ae8ea9237cca449b8053ff5f368e7",
"text": "With the rapid development of Location-based Social Network (LBSN) services, a large number of Point-of-Interests (POIs) have been available, which consequently raises a great demand of building personalized POI recommender systems. A personalized POI recommender system can significantly help users to find their preferred POIs and assist POI owners to attract more customers. However, due to the complexity of users’ checkin decision making process that is influenced by many different factors such as POI distance and region’s prosperity, and the dynamics of user’s preference, POI recommender systems usually suffer from many challenges. Although different latent factor based methods (e.g., probabilistic matrix factorization) have been proposed, most of them do not successfully incorporate both geographical influence and temporal effect together into latent factor models. To this end, in this paper, we propose a new Spatial-Temporal Probabilistic Matrix Factorization (STPMF) model that models a user’s preference for POI as the combination of his geographical preference and other general interest in POI. Furthermore, in addition to static general interest of user, we capture the temporal dynamics of user’s interest as well by modeling checkin data in a unique way. To evaluate the proposed STPMF model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results clearly demonstrate the effectiveness of our proposed STPMF model.",
"title": ""
},
{
"docid": "85a09871ca341ca5f70a78b2df8fdc02",
"text": "This paper presents a multi-channel frequency-modulated continuous-wave (FMCW) radar sensor operating in the frequency range from 91 to 97 GHz. The millimeter-wave radar sensor utilizes an SiGe chipset comprising a single signal-generation chip and multiple monostatic transceiver (TRX) chips, which are based on a 200-GHz fT HBT technology. The front end is built on an RF soft substrate in chip-on-board technology and employs a nonuniformly distributed antenna array to improve the angular resolution. The synthesis of ten virtual antennas achieved by a multiple-input multiple-output technique allows the virtual array aperture to be maximized. The fundamental-wave voltage-controlled oscillator achieves a single-sideband phase noise of -88 dBc/Hz at 1-MHz offset frequency. The TX provides a saturated output power of 6.5 dBm, and the mixer within the TRX achieves a gain and a double sideband noise figure of 11.5 and 12 dB, respectively. Possible applications include radar sensing for range and angle detection, material characterization, and imaging.",
"title": ""
},
{
"docid": "b867d81593998fb13359b19f52e3923e",
"text": "VoIP (Voice over IP) is a modern service with enormous potential for yet further growth. It uses the already available and universally implemented IP transport platform. One significant problem, however, is ensuring the Quality of Service, abbreviated QoS. This paper addresses exactly that issue. In an extensive investigation the influence of jitter buffers on QoS is being examined in depth. Two implementations, namely a passive FIFO buffer and an active PJSIP buffer are considered. The results obtained are presented in several diagrams and interpreted. They provide valuable insights and indications as to how procedures to ensure QoS in IP networks can be planned and implemented. The paper concludes with a summary and outlook on further work.",
"title": ""
},
{
"docid": "947f17970a81ebc4e8c780b1291aa474",
"text": "Minimally invasive total hip arthroplasty (THA) is claimed to be superior to the standard technique, due to the potential reduction of soft tissue damage via a smaller and tissue-sparing approach. As a result of the lack of objective evidence of fewer muscle and tendon defects, controversy still remains as to whether minimally invasive total hip arthroplasty truly minimizes muscle and tendon damage. Therefore, the objective was to compare the influence of the surgical approach on abductor muscle trauma and to analyze the relevance to postoperative pain and functional recovery. Between June 2006 and July 2007, 44 patients with primary hip arthritis were prospectively included in the study protocol. Patients underwent cementless unilateral total hip arthroplasty either through a minimally invasive anterolateral approach (ALMI) (n = 21) or a modified direct lateral approach (mDL) (n = 16). Patients were evaluated clinically and underwent MR imaging preoperatively and at 3 and 12 months postoperatively. Clinical assessment contained clinical examination, performance of abduction test and the survey of a function score using the Harris Hip Score, a pain score using a numeric rating scale (NRS) of 0–10, as well as a satisfaction score using an NRS of 1–6. Additionally, myoglobin and creatine kinase were measured preoperatively, and 6, 24 and 96 h postoperatively. Evaluation of the MRI images included fatty atrophy (rating scale 0–4), tendon defects (present/absent) and bursal fluid collection of the abductor muscle. Muscle and tendon damage occurred in both groups, but more lateral gluteus medius tendon defects [mDL 3/12mth.: 6 (37%)/4 (25%); ALMI: 3 (14%)/2 (9%)] and muscle atrophy in the anterior part of the gluteus medius [mean-standard (12): 1.75 ± 1.8; mean-MIS (12): 0.98 ± 1.1] were found in patients with the mDL approach. The clinical outcome was also poorer compared to the ALMI group. Significantly, more Trendelenburg’s signs were evident and lower clinical scores were achieved in the mDL group. No differences in muscle and tendon damage were found for the gluteus minimus muscle. A higher serum myoglobin concentration was measured 6 and 24 h postoperatively in the mDL group (6 h: 403 ± 168 μg/l; 24 h: 304 ± 182 μg/l) compared to the ALMI group (6 h: 331 ± 143 μg/l; 24 h: 268 ± 145 μg/l). Abductor muscle and tendon damage occurred in both approaches, but the gluteus medius muscle can be spared more successfully via the minimally invasive approach and is accompanied by a better clinical outcome. Therefore, going through the intermuscular plane, without any detachment or dissection of muscle and tendons, truly minimizes perioperative soft tissue trauma. Furthermore, MRI emerges as an important imaging modality in the evaluation of muscle trauma in THA.",
"title": ""
},
{
"docid": "253b2696bb52f43528f02e85d1070e96",
"text": "Prosocial behavior consists of behaviors regarded as beneficial to others, including helping, sharing, comforting, guiding, rescuing, and defending others. Although women and men are similar in engaging in extensive prosocial behavior, they are different in their emphasis on particular classes of these behaviors. The specialty of women is prosocial behaviors that are more communal and relational, and that of men is behaviors that are more agentic and collectively oriented as well as strength intensive. These sex differences, which appear in research in various settings, match widely shared gender role beliefs. The origins of these beliefs lie in the division of labor, which reflects a biosocial interaction between male and female physical attributes and the social structure. The effects of gender roles on behavior are mediated by hormonal processes, social expectations, and individual dispositions.",
"title": ""
},
{
"docid": "949d240eae2357f29892ae8b25901c2f",
"text": "We present a convolutional approach to reflection symmetry detection in 2D. Our model, built on the products of complex-valued wavelet convolutions, simplifies previous edgebased pairwise methods. Being parameter-centered, as opposed to feature-centered, it has certain computational advantages when the object sizes are known a priori, as demonstrated in an ellipse detection application. The method outperforms the best-performing algorithm on the CVPR 2013 Symmetry Detection Competition Database in the single-symmetry case. Code and a new database for 2D symmetry detection is available.",
"title": ""
},
{
"docid": "4cd396d87c8e30c6949b5f2bdb71806d",
"text": "Recently, the market of the PV power plant is growing up in the Asian market. In the PV power plant, typically, an inverter which has the rated power of few hundreds kVA is applied to feed the power to the grid. A 1-MW solar power inverter which employs all SiC Power Modules has been developed. The developed solar power inverter consists of two conversion stages, first stage is a boost converter and second stage is a T-type NPC inverter. A chopper module in the boost converter is configured with SiC-based MOSFETs and Schottky Barrier Diodes, and 48 chopper modules are used in parallel. Each chopper module is controlled individually. The T-type NPC inverter stage is configured with Si-based IGBTs and RB-IGBTs. In this paper, the circuit configurations of the developed solar power inverter, employed SiC-based power devices, and the control scheme are described in detail. In the end, the total efficiencies for the minimum, nominal, and maximum DC voltages are experimentally measured. The measured efficiency at the rated output power varies from 98 % to 98.6 % depending on the values of DC input voltage. The maximum efficiency of 98.8 % is achieved in the case of maximum DC input voltage.",
"title": ""
},
{
"docid": "835fd7a4410590a3d848222eb3159aeb",
"text": "Modularity in organizations can facilitate the creation and development of dynamic capabilities. Paradoxically, however, modular management can also stifle the strategic potential of such capabilities by conflicting with the horizontal integration of units. We address these issues through an examination of how modular management of information technology (IT), project teams and front-line personnel in concert with knowledge management (KM) interventions influence the creation and development of dynamic capabilities at a large Asia-based call center. Our findings suggest that a full capitalization of the efficiencies created by modularity may be closely linked to the strategic sense making abilities of senior managers to assess the long-term business value of the dominant designs available in the market. Drawing on our analysis we build a modular management-KM-dynamic capabilities model, which highlights the evolution of three different levels of dynamic capabilities and also suggests an inherent complementarity between modular and integrated approaches. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c18cec45829e4aec057443b9da0eeee5",
"text": "This paper presents a synthesis on the application of fuzzy integral as an innovative tool for criteria aggregation in decision problems. The main point is that fuzzy integrals are able to model interaction between criteria in a flexible way. The methodology has been elaborated mainly in Japan, and has been applied there successfully in various fields such as design, reliability, evaluation of goods, etc. It seems however that this technique is still very little known in Europe. It is one of the aim of this review to disseminate this emerging technology in many industrial fields.",
"title": ""
},
{
"docid": "54afd49e0853e258916e2a36605177f0",
"text": "Novolac type liquefied wood/phenol/formaldehyde (LWPF) resins were synthesized from liquefied wood and formaldehyde. The average molecular weight of the LWPF resin made from the liquefied wood reacted in an atmospheric three neck flask increased with increasing P/W ratio. However, it decreased with increasing phenol/wood ratio when using a sealed Parr reactor. On average, the LWPF resin made from the liquefied wood reacted in the Parr reactor had lower molecular weight than those from the atmospheric three neck flask. The infrared spectra of the LWPF resins were similar to that of the conventional novolac resin but showed a major difference at the 1800–1600 cm-1 region. These results indicate that liquefied wood could partially substitute phenol in the novolac resin synthesis. The composites with the liquefied wood resin from the sealed Parr reactor yielded higher thickness swelling than those with the liquefied wood resin from the three neck flask likely due to the hydrophilic wood components incorporated in it and the lower cross-link density than the liquefied wood resin from the three neck flask during the resin cure process. Novolakartige LWPF-Harze wurden aus verflüssigtem Holz und Formaldehyd synthetisch hergestellt. Das mittlere Molekülgewicht des LWPF-Harzes, das aus verflüssigtem Holz in einem atmosphärischen Dreihals-Kolben hergestellt worden war, nahm mit steigendem Phenol/Holz-Verhältnis (P/W) zu, wohingegen es bei der Herstellung in einem versiegelten Parr Reaktor mit steigendem P/W-Verhältnis abnahm. LWPF-Harz, das aus verflüssigtem Holz in einem Parr Reaktor hergestellt worden war, hatte durchschnittlich ein niedrigeres Molekülgewicht als LWPF-Harz, das in einem atmosphärischen Dreihals-Kolben hergestellt worden war. Die Infrarot-Spektren der LWPF-Harze ähnelten denjenigen von konventionellem Novolak Harz, unterschieden sich jedoch im 1800–1600 cm-1 Bereich deutlich. Diese Ergebnisse zeigen, dass das Phenol bei der Synthese von Novolak-Harz teilweise durch verflüssigtes Holz ersetzt werden kann. Verbundwerkstoffe mit LWPF-Harz, das aus verflüssigtem Holz im versiegelten Parr Reaktor hergestellt worden war, wiesen eine höhere Dickenquellung auf als diejenigen mit LWPF-Harz, das im Dreihals-Kolben hergestellt worden war. Der Grund besteht wahrscheinlich in den im Vergleich zu LWPF-Harz aus dem Dreihals-Kolben eingebundenen hydrophilen Holzbestandteilen und der niedrigeren Vernetzungsdichte während der Aushärtung.",
"title": ""
},
{
"docid": "6c22d7219bd120e6eeb3971164d9088f",
"text": "We propose a novel generative model of human motion that can be trained using a large motion capture dataset, and allows users to produce animations from high-level control signals. As previous architectures struggle to predict motions far into the future due to the inherent ambiguity, we argue that a user-provided control signal is desirable for animators and greatly reduces the predictive error for long sequences. Thus, we formulate a framework which explicitly introduces an encoding of control signals into a variational inference framework trained to learn the manifold of human motion. As part of this framework, we formulate a prior on the latent space, which allows us to generate high-quality motion without providing frames from an existing sequence. We further model the sequential nature of the task by combining samples from a variational approximation to the intractable posterior with the control signal through a recurrent neural network (RNN) that synthesizes the motion. We show that our system can predict the movements of the human body over long horizons more accurately than state-of-theart methods. Finally, the design of our system considers practical use cases and thus provides a competitive approach to motion synthesis.",
"title": ""
}
] | scidocsrr |
0216ea0249466bf849388281f98a4f11 | An Object Co-occurrence Assisted Hierarchical Model for Scene Understanding | [
{
"docid": "c8d9ec6aa63b783e4c591dccdbececcf",
"text": "The use of context is critical for scene understanding in computer vision, where the recognition of an object is driven by both local appearance and the object’s relationship to other elements of the scene (context). Most current approaches rely on modeling the relationships between object categories as a source of context. In this paper we seek to move beyond categories to provide a richer appearancebased model of context. We present an exemplar-based model of objects and their relationships, the Visual Memex, that encodes both local appearance and 2D spatial context between object instances. We evaluate our model on Torralba’s proposed Context Challenge against a baseline category-based system. Our experiments suggest that moving beyond categories for context modeling appears to be quite beneficial, and may be the critical missing ingredient in scene understanding systems.",
"title": ""
},
{
"docid": "77d354505cdd474c1b381b415f115ca0",
"text": "Scene recognition is a highly valuable perceptual ability for an indoor mobile robot, however, current approaches for scene recognition present a significant drop in performance for the case of indoor scenes. We believe that this can be explained by the high appearance variability of indoor environments. This stresses the need to include high-level semantic information in the recognition process. In this work we propose a new approach for indoor scene recognition based on a generative probabilistic hierarchical model that uses common objects as an intermediate semantic representation. Under this model, we use object classifiers to associate low-level visual features to objects, and at the same time, we use contextual relations to associate objects to scenes. As a further contribution, we improve the performance of current state-of-the-art category-level object classifiers by including geometrical information obtained from a 3D range sensor that facilitates the implementation of a focus of attention mechanism within a Monte Carlo sampling scheme. We test our approach using real data, showing significant advantages with respect to previous state-of-the-art methods.",
"title": ""
}
] | [
{
"docid": "026628151680da901c741766248f0055",
"text": "We analyzea corpusof referringexpressionscollected from userinteractionswith a multimodal travel guide application.Theanalysissuggeststhat,in dramaticcontrastto normalmodesof human-humaninteraction,the interpretationof referringexpressionscanbecomputed with very high accuracy usinga modelwhich pairsan impoverishednotionof discoursestatewith asimpleset of rulesthatareinsensiti ve to the type of referringexpressionused. We attribute this result to the implicit mannerin which theinterfaceconveys thesystem’ s beliefs abouttheoperati ve discoursestate,to which users tailor their choiceof referringexpressions.This result offersnew insightinto thewaycomputerinterfacescan shapea user’ s languagebehavior, insightswhich can be exploited to bring otherwisedifficult interpretation problemsinto therealmof tractability.",
"title": ""
},
{
"docid": "93a9fdca133adfd8b6e7b8f030e95622",
"text": "Prostate segmentation from Magnetic Resonance (MR) images plays an important role in image guided intervention. However, the lack of clear boundary specifically at the apex and base, and huge variation of shape and texture between the images from different patients make the task very challenging. To overcome these problems, in this paper, we propose a deeply supervised convolutional neural network (CNN) utilizing the convolutional information to accurately segment the prostate from MR images. The proposed model can effectively detect the prostate region with additional deeply supervised layers compared with other approaches. Since some information will be abandoned after convolution, it is necessary to pass the features extracted from early stages to later stages. The experimental results show that significant segmentation accuracy improvement has been achieved by our proposed method compared to other reported approaches.",
"title": ""
},
{
"docid": "e162fcb6b897e941cd26558f4ed16cd5",
"text": "In this paper, we propose a novel real-valued time-delay neural network (RVTDNN) suitable for dynamic modeling of the baseband nonlinear behaviors of third-generation (3G) base-station power amplifiers (PA). Parameters (weights and biases) of the proposed model are identified using the back-propagation algorithm, which is applied to the input and output waveforms of the PA recorded under real operation conditions. Time- and frequency-domain simulation of a 90-W LDMOS PA output using this novel neural-network model exhibit a good agreement between the RVTDNN behavioral model's predicted results and measured ones along with a good generality. Moreover, dynamic AM/AM and AM/PM characteristics obtained using the proposed model demonstrated that the RVTDNN can track and account for the memory effects of the PAs well. These characteristics also point out that the small-signal response of the LDMOS PA is more affected by the memory effects than the PAs large-signal response when it is driven by 3G signals. This RVTDNN model requires a significantly reduced complexity and shorter processing time in the analysis and training procedures, when driven with complex modulated and highly varying envelope signals such as 3G signals, than previously published neural-network-based PA models.",
"title": ""
},
{
"docid": "178dc3f162f0a4bd2a43ae4da72478cc",
"text": "Regularisation of deep neural networks (DNN) during training is critical to performance. By far the most popular method is known as dropout. Here, cast through the prism of signal processing theory, we compare and c ontrast the regularisation effects of dropout with those of dither. We illustrate some serious inherent limitations of dropout and demonstrate that dither provides a far more effecti ve regulariser which does not suffer from the same limitations.",
"title": ""
},
{
"docid": "413a08e904839edb6fd2e031d8bdc807",
"text": "A data collection instrument that a respondent self-completes through the visual channel, such as on paper or over the Web, is visually administered. Although insightful in many ways, traditional methods of evaluating questionnaires, such as cognitive interviewing, usability testing, and experimentation may be insufficient when it comes to evaluating the design of visually administered questionnaires because these methods cannot directly identify information respondents perceive or the precise order in which they observe the information (Redline et al 1998). In this paper, we present the results of a study that was conducted to explore whether eye-movement analysis might prove a promising new tool for evaluating the design of visually administered questionnaires. Eye tracking hardware and software, which were originally developed at the Human for use with computer monitors, were adapted to track the eye movements of respondents answering three versions of a paper questionnaire. These versions were chosen for study because differences in the design of their branching instructions were hypothesized to affect eye-movements, which in turn may affect the accuracy of following the branching instructions (Redline and Dillman Forthcoming). Background Eye-movement analysis has been used in other fields, most notably reading and scene perception, to study cognitive processing (e.g., Rayner 1992; Rayner 1983). However, survey design research grew out of the interviewer-administered realm, which has been primarily focused on respondents' comprehension of the spoken language of questionnaires. Therefore, the mechanism by which respondents perceive information presented on paper questionnaires or over the Web, the eyes and their movements, has not received much attention until recently. Other reasons for the lack of eye-movement research in the survey field are its cost and relative difficulty. As others have noted, eye-movement research requires specialized knowledge, equipment and expertise to operate the equipment. In addition,",
"title": ""
},
{
"docid": "25e50a3e98b58f833e1dd47aec94db21",
"text": "Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.",
"title": ""
},
{
"docid": "7603ee2e0519b727de6dc29e05b2049f",
"text": "To what extent do we share feelings with others? Neuroimaging investigations of the neural mechanisms involved in the perception of pain in others may cast light on one basic component of human empathy, the interpersonal sharing of affect. In this fMRI study, participants were shown a series of still photographs of hands and feet in situations that are likely to cause pain, and a matched set of control photographs without any painful events. They were asked to assess on-line the level of pain experienced by the person in the photographs. The results demonstrated that perceiving and assessing painful situations in others was associated with significant bilateral changes in activity in several regions notably, the anterior cingulate, the anterior insula, the cerebellum, and to a lesser extent the thalamus. These regions are known to play a significant role in pain processing. Finally, the activity in the anterior cingulate was strongly correlated with the participants' ratings of the others' pain, suggesting that the activity of this brain region is modulated according to subjects' reactivity to the pain of others. Our findings suggest that there is a partial cerebral commonality between perceiving pain in another individual and experiencing it oneself. This study adds to our understanding of the neurological mechanisms implicated in intersubjectivity and human empathy.",
"title": ""
},
{
"docid": "ffbebb5d8f4d269353f95596c156ba5c",
"text": "Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.",
"title": ""
},
{
"docid": "7efa3543711bc1bb6e3a893ed424b75d",
"text": "This dissertation is concerned with the creation of training data and the development of probability models for statistical parsing of English with Combinatory Categorial Grammar (CCG). Parsing, or syntactic analysis, is a prerequisite for semantic interpretation, and forms therefore an integral part of any system which requires natural language understanding. Since almost all naturally occurring sentences are ambiguous, it is not sufficient (and often impossible) to generate all possible syntactic analyses. Instead, the parser needs to rank competing analyses and select only the most likely ones. A statistical parser uses a probability model to perform this task. I propose a number of ways in which such probability models can be defined for CCG. The kinds of models developed in this dissertation, generative models over normal-form derivation trees, are particularly simple, and have the further property of restricting the set of syntactic analyses to those corresponding to a canonical derivation structure. This is important to guarantee that parsing can be done efficiently. In order to achieve high parsing accuracy, a large corpus of annotated data is required to estimate the parameters of the probability models. Most existing wide-coverage statistical parsers use models of phrase-structure trees estimated from the Penn Treebank, a 1-million-word corpus of manually annotated sentences from the Wall Street Journal. This dissertation presents an algorithm which translates the phrase-structure analyses of the Penn Treebank to CCG derivations. The resulting corpus, CCGbank, is used to train and test the models proposed in this dissertation. Experimental results indicate that parsing accuracy (when evaluated according to a comparable metric, the recovery of unlabelled word-word dependency relations), is as high as that of standard Penn Treebank parsers which use similar modelling techniques. Most existing wide-coverage statistical parsers use simple phrase-structure grammars whose syntactic analyses fail to capture long-range dependencies, and therefore do not correspond to directly interpretable semantic representations. By contrast, CCG is a grammar formalism in which semantic representations that include long-range dependencies can be built directly during the derivation of syntactic structure. These dependencies define the predicate-argument structure of a sentence, and are used for two purposes in this dissertation: First, the performance of the parser can be evaluated according to how well it recovers these dependencies. In contrast to purely syntactic evaluations, this yields a direct measure of how accurate the semantic interpretations returned by the parser are. Second, I propose a generative model that captures the local and non-local dependencies in the predicate-argument structure, and investigate the impact of modelling non-local in addition to local dependencies.",
"title": ""
},
{
"docid": "ab75cb747666f6b115a94f1dfb627d63",
"text": "Over the last years, Enterprise Social Networks (ESN) have gained increasing attention both in academia and practice, resulting in a large number of publications dealing with ESN. Among them is a large number of case studies describing the benefits of ESN in each individual case. Based on the different research objects they focus, various benefits are described. However, an overview of the benefits achieved by using ESN is missing and will, thus, be elaborated in this article (research question 1). Further, we cluster the identified benefits to more generic categories and finally classify them to the capabilities of traditional IT as presented by Davenport and Short (1990) to determine if new capabilities of IT arise using ESN (research question 2). To address our research questions, we perform a qualitative content analysis on 37 ESN case studies. As a result, we identify 99 individual benefits, classify them to the capabilities of traditional IT, and define a new IT capability named Social Capital. Our results can, e.g., be used to align and expand current ESN success measurement approaches.",
"title": ""
},
{
"docid": "8de4182b607888e6c7cbe6d6ae8ee122",
"text": "In this article, we focus on isolated gesture recognition and explore different modalities by involving RGB stream, depth stream, and saliency stream for inspection. Our goal is to push the boundary of this realm even further by proposing a unified framework that exploits the advantages of multi-modality fusion. Specifically, a spatial-temporal network architecture based on consensus-voting has been proposed to explicitly model the long-term structure of the video sequence and to reduce estimation variance when confronted with comprehensive inter-class variations. In addition, a three-dimensional depth-saliency convolutional network is aggregated in parallel to capture subtle motion characteristics. Extensive experiments are done to analyze the performance of each component and our proposed approach achieves the best results on two public benchmarks, ChaLearn IsoGD and RGBD-HuDaAct, outperforming the closest competitor by a margin of over 10% and 15%, respectively. Our project and codes will be released at https://davidsonic.github.io/index/acm_tomm_2017.html.",
"title": ""
},
{
"docid": "ea29dbae2b19f4b8af208aa551744a07",
"text": "This paper presents a general vector-valued reproducing kernel Hilbert spaces (RKHS) formulation for the problem of learning an unknown functional dependency between a structured input space and a structured output space, in the Semi-Supervised Learning setting. Our formulation includes as special cases Vector-valued Manifold Regularization and Multi-view Learning, thus provides in particular a unifying framework linking these two important learning approaches. In the case of least square loss function, we provide a closed form solution with an efficient implementation. Numerical experiments on challenging multi-class categorization problems show that our multi-view learning formulation achieves results which are comparable with state of the art and are significantly better than single-view learning.",
"title": ""
},
{
"docid": "879af50edd27c74bde5b656d0421059a",
"text": "In this thesis we present an approach to adapt the Single Shot multibox Detector (SSD) for face detection. Our experiments are performed on the WIDER dataset which contains a large amount of small faces (faces of 50 pixels or less). The results show that the SSD method performs poorly on the small/hard subset of this dataset. We analyze the influence of increasing the resolution during inference and training time. Building on this analysis we present two additions to the SSD method. The first addition is changing the SSD architecture to an image pyramid architecture. The second addition is creating a selection criteria on each of the different branches of the image pyramid architecture. The results show that increasing the resolution, even during inference, increases the performance for the small/hard subset. By combining resolutions in an image pyramid structure we observe that the performance keeps consistent across different sizes of faces. Finally, the results show that adding a selection criteria on each branch of the image pyramid further increases performance, because the selection criteria negates the competing behaviour of the image pyramid. We conclude that our approach not only increases performance on the small/hard subset of the WIDER dataset but keeps on performing well on the large subset.",
"title": ""
},
{
"docid": "8a41d0190ae25baf0a270d9524ea99d3",
"text": "Hybrid AC/DC microgrid is a compromised solution to cater for the increasing penetration of DC-compatible energy sources, storages and loads. In this paper, DC/DC converter with High Frequency Transformer (DHFT) is proposed to replace the conventional bulky transformer for bus voltage matching and galvanic isolation. Various DHFT topologies have been compared and CLLC-type has been recommended due to its capabilities of bidirectional power flow, seamless transition and low switching loss. Different operating scenarios of the hybrid AC/DC microgrid have been analyzed and DHFT open-loop control has been selected to simplify systematic coordination. DHFT are designed in order to maximize the conversion efficiency and minimize output voltage variations in different loading conditions. Lab-scale prototypes of the DHFT and hybrid AC/DC microgrid have been developed for experimental verifications. The performances of DHFT and system in both steady state and transient states have been confirmed.",
"title": ""
},
{
"docid": "2e2e8219b7870529e8ca17025190aa1b",
"text": "M multitasking competes with television advertising for consumers’ attention, but may also facilitate immediate and measurable response to some advertisements. This paper explores whether and how television advertising influences online shopping. We construct a massive data set spanning $3.4 billion in spending by 20 brands, measures of brands’ website traffic and transactions, and ad content measures for 1,224 commercials. We use a quasi-experimental design to estimate whether and how TV advertising influences changes in online shopping within two-minute pre/post windows of time. We use nonadvertising competitors’ online shopping in a difference-in-differences approach to measure the same effects in two-hour windows around the time of the ad. The findings indicate that television advertising does influence online shopping and that advertising content plays a key role. Action-focus content increases direct website traffic and sales. Information-focus and emotion-focus ad content actually reduce website traffic while simultaneously increasing purchases, with a positive net effect on sales for most brands. These results imply that brands seeking to attract multitaskers’ attention and dollars must select their advertising copy carefully.",
"title": ""
},
{
"docid": "4ed98f4c2e09f8f3b81f2f7faa2ad573",
"text": "The current nursing shortage and high turnover is of great concern in many countries because of its impact upon the efficiency and effectiveness of any health-care delivery system. Recruitment and retention of nurses are persistent problems associated with job satisfaction. This paper analyses the growing literature relating to job satisfaction among nurses and concludes that more research is required to understand the relative importance of the many identified factors to job satisfaction. It is argued that the absence of a robust causal model incorporating organizational, professional and personal variables is undermining the development of interventions to improve nurse retention.",
"title": ""
},
{
"docid": "eae0f8a921b301e52c822121de6c6b58",
"text": "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system are publicly available1.",
"title": ""
},
{
"docid": "18c190df7c133085d418c58357b4c81c",
"text": "Attention can be improved by repetition of a specific task that involves an attention network (network training), or by exercise or meditation that changes the brain state (state training). We first review the concept of attention networks that link changes in orienting, alerting and executive control to brain networks. Network training through video games or computer exercises can improve aspects of attention. The extent of transfer beyond the trained task is a controversial issue. Mindfulness is a form of meditation that keeps attention focused on the current moment. Some forms of meditation have been shown to improve executive attention reduce stress and produce specific brain changes. Additional research is needed to understand the limits and mechanisms of these effects.",
"title": ""
},
{
"docid": "1aaa0e23d795121fbe5673873ea2aea7",
"text": "The fifth generation of mobile networks is planned to be commercially available in a few years. The scope of 5G goes beyond introducing new radio interfaces, and will include new services like low-latency industrial applications, as well as new deployment models such as cooperative cells and densification through small cells. An efficient realization of these new features greatly benefit from tight coordination among radio and transport network resources, something that is missing in current networks. In this article, we first present an overview of the benefits and technical requirements of resource coordination across radio and transport networks in the context of 5G. Then, we discuss how SDN principles can bring programmability to both the transport and radio domains, which in turn enables the design of a hierarchical, modular, and programmable control and orchestration plane across the domains. Finally, we introduce two use cases of SDN-based transport and RAN orchestration, and present an experimental implementation of them in a testbed in our lab, which confirms the feasibility and benefits of the proposed orchestration.",
"title": ""
},
{
"docid": "12363d704fcfe9fef767c5e27140c214",
"text": "The application range of UAVs (unmanned aerial vehicles) is expanding along with performance upgrades. Vertical take-off and landing (VTOL) aircraft has the merits of both fixed-wing and rotary-wing aircraft. Tail-sitting is the simplest way for the VTOL maneuver since it does not need extra actuators. However, conventional hovering control for a tail-sitter UAV is not robust enough against large disturbance such as a blast of wind, a bird strike, and so on. It is experimentally observed that the conventional quaternion feedback hovering control often fails to keep stability when the control compensates large attitude errors. This paper proposes a novel hovering control strategy for a tail-sitter VTOL UAV that increases stability against large disturbance. In order to verify the proposed hovering control strategy, simulations and experiments on hovering of the UAV are performed giving large attitude errors. The results show that the proposed control strategy successfully compensates initial large attitude errors keeping stability, while the conventional quaternion feedback controller fails.",
"title": ""
}
] | scidocsrr |
546a64b871f37f1b67c7731641cd8ce4 | Assessment , Enhancement , and Verification Determinants of the Self-Evaluation Process | [
{
"docid": "0b88b9b165a74cc630a0cf033308d6c2",
"text": "It is proposed that motivation may affect reasoning through reliance on a biased set of cognitive processes--that is, strategies for accessing, constructing, and evaluating beliefs. The motivation to be accurate enhances use of those beliefs and strategies that are considered most appropriate, whereas the motivation to arrive at particular conclusions enhances use of those that are considered most likely to yield the desired conclusion. There is considerable evidence that people are more likely to arrive at conclusions that they want to arrive at, but their ability to do so is constrained by their ability to construct seemingly reasonable justifications for these conclusions. These ideas can account for a wide variety of research concerned with motivated reasoning.",
"title": ""
}
] | [
{
"docid": "9775396477ccfde5abdd766588655539",
"text": "The use of hand gestures offers an alternative to the commonly used human computer interfaces, providing a more intuitive way of navigating among menus and multimedia applications. This paper presents a system for hand gesture recognition devoted to control windows applications. Starting from the images captured by a time-of-flight camera (a camera that produces images with an intensity level inversely proportional to the depth of the objects observed) the system performs hand segmentation as well as a low-level extraction of potentially relevant features which are related to the morphological representation of the hand silhouette. Classification based on these features discriminates between a set of possible static hand postures which results, combined with the estimated motion pattern of the hand, in the recognition of dynamic hand gestures. The whole system works in real-time, allowing practical interaction between user and application.",
"title": ""
},
{
"docid": "f462de59dd8b45f7c7e27672125010d2",
"text": "Researchers have recently noted (14; 27) the potential of fast poisoning attacks against DNS servers, which allows attackers to easily manipulate records in open recursive DNS resolvers. A vendor-wide upgrade mitigated but did not eliminate this attack. Further, existing DNS protection systems, including bailiwick-checking (12) and IDS-style filtration, do not stop this type of DNS poisoning. We therefore propose Anax, a DNS protection system that detects poisoned records in cache. Our system can observe changes in cached DNS records, and applies machine learning to classify these updates as malicious or benign. We describe our classification features and machine learning model selection process while noting that the proposed approach is easily integrated into existing local network protection systems. To evaluate Anax, we studied cache changes in a geographically diverse set of 300,000 open recursive DNS servers (ORDNSs) over an eight month period. Using hand-verified data as ground truth, evaluation of Anax showed a very low false positive rate (0.6% of all new resource records) and a high detection",
"title": ""
},
{
"docid": "fb44e3c2624d92c9ed408ebd00bdb793",
"text": "A novel method for online data acquisition of cursive handwriting is described. A video camera is used to record the handwriting of a user. From the acquired sequence of images, the movement of the tip of the pen is reconstructed. A prototype of the system has been implemented and tested. In one series of tests, the performance of the system was visually assessed. In another series of experiments, the system was combined with an existing online handwriting recognizer. Good results have been obtained in both sets of experiments.",
"title": ""
},
{
"docid": "3a17d60c2eb1df3bf491be3297cffe79",
"text": "Received: 3 October 2009 Revised: 22 June 2011 Accepted: 3 July 2011 Abstract Studies claiming to use the Grounded theory methodology (GTM) have been quite prevalent in information systems (IS) literature. A cursory review of this literature reveals conflict in the understanding of GTM, with a variety of grounded theory approaches apparent. The purpose of this investigation was to establish what alternative grounded theory approaches have been employed in IS, and to what extent each has been used. In order to accomplish this goal, a comprehensive set of IS articles that claimed to have followed a grounded theory approach were reviewed. The articles chosen were those published in the widely acknowledged top eight IS-centric journals, since these journals most closely represent exemplar IS research. Articles for the period 1985-2008 were examined. The analysis revealed four main grounded theory approaches in use, namely (1) the classic grounded theory approach, (2) the evolved grounded theory approach, (3) the use of the grounded theory approach as part of a mixed methodology, and (4) the application of grounded theory techniques, typically for data analysis purposes. The latter has been the most common approach in IS research. The classic approach was the least often employed, with many studies opting for an evolved or mixed method approach. These and other findings are discussed and implications drawn. European Journal of Information Systems (2013) 22, 119–129. doi:10.1057/ejis.2011.35; published online 30 August 2011",
"title": ""
},
{
"docid": "98c3588648676eea3bb78a43aef92af4",
"text": "Data mining (DM) techniques are being increasingly used in many modern organizations to retrieve valuable knowledge structures from organizational databases, including data warehouses. An important knowledge structure that can result from data mining activities is the decision tree (DT) that is used for the classi3cation of future events. The induction of the decision tree is done using a supervised knowledge discovery process in which prior knowledge regarding classes in the database is used to guide the discovery. The generation of a DT is a relatively easy task but in order to select the most appropriate DT it is necessary for the DM project team to generate and analyze a signi3cant number of DTs based on multiple performance measures. We propose a multi-criteria decision analysis based process that would empower DM project teams to do thorough experimentation and analysis without being overwhelmed by the task of analyzing a signi3cant number of DTs would o7er a positive contribution to the DM process. We also o7er some new approaches for measuring some of the performance criteria. ? 2003 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "31add593ce5597c24666d9662b3db89d",
"text": "Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of dressed human body scans. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2, 18] as statistical model.",
"title": ""
},
{
"docid": "42d2f3c2cc7ed0c08dd8f450091e5a7a",
"text": "Analytical methods validation is an important regulatory requirement in pharmaceutical analysis. High-Performance Liquid Chromatography (HPLC) is commonly used as an analytical technique in developing and validating assay methods for drug products and drug substances. Method validation provides documented evidence, and a high degree of assurance, that an analytical method employed for a specific test, is suitable for its intended use. Over recent years, regulatory authorities have become increasingly aware of the necessity of ensuring that the data submitted to them in applications for marketing authorizations have been acquired using validated analytical methodology. The International Conference on Harmonization (ICH) has introduced guidelines for analytical methods validation. 1,2 The U.S. Food and Drug Administration (FDA) methods validation draft guidance document, 3-5 as well as United States Pharmacopoeia (USP) both refer to ICH guidelines. These draft guidances define regulatory and alternative analytical procedures and stability-indicating assays. The FDA has proposed adding section CFR 211.222 on analytical methods validation to the current Good Manufacturing Practice (cGMP) regulations. 7 This would require pharmaceutical manufacturers to establish and document the accuracy, sensitivity, specificity, reproducibility, and any other attribute (e.g., system suitability, stability of solutions) necessary to validate test methods. Regulatory analytical procedures are of two types: compendial and noncompendial. The noncompendial analytical procedures in the USP are those legally recognized as regulatory procedures under section 501(b) of the Federal Food, Drug and Cosmetic Act. When using USP analytical methods, the guidance recommends that information be provided for the following characteristics: specificity of the method, stability of the analytical sample solution, and intermediate precision. Compendial analytical methods may not be stability indicating, and this concern must be addressed when developing a drug product specification, because formulation based interference may not be considered in the monograph specifications. Additional analytical tests for impurities may be necessary to support the quality of the drug substance or drug product. Noncompendial analytical methods must be fully validated. The most widely applied validation characteristics are accuracy, precision (repeatability and intermediate precision), specificity, detection limit, quantitation limit, linearity, range, and stability of analytical solutions. The parameters that require validation and the approach adopted for each particular case are dependent on the type and applications of the method. Before undertaking the task of method validation, it is necessary that the analytical system itself is adequately designed, maintained, calibrated, and validated. 8 The first step in method validation is to prepare a protocol, preferably written with the instructions in a clear step-by-step format. This A Practical Approach to Validation of HPLC Methods Under Current Good Manufacturing Practices",
"title": ""
},
{
"docid": "2f5776d8ce9714dcee8d458b83072f74",
"text": "The componential theory of creativity is a comprehensive model of the social and psychological components necessary for an individual to produce creative work. The theory is grounded in a definition of creativity as the production of ideas or outcomes that are both novel and appropriate to some goal. In this theory, four components are necessary for any creative response: three components within the individual – domainrelevant skills, creativity-relevant processes, and intrinsic task motivation – and one component outside the individual – the social environment in which the individual is working. The current version of the theory encompasses organizational creativity and innovation, carrying implications for the work environments created by managers. This entry defines the components of creativity and how they influence the creative process, describing modifications to the theory over time. Then, after comparing the componential theory to other creativity theories, the article describes this theory’s evolution and impact.",
"title": ""
},
{
"docid": "981b4977ed3524545d9ae5016d45c8d6",
"text": "Related to different international activities in the Optical Wireless Communications (OWC) field Graz University of Technology (TUG) has high experience on developing different high data rate transmission systems and is well known for measurements and analysis of the OWC-channel. In this paper, a novel approach for testing Free Space Optical (FSO) systems in a controlled laboratory condition is proposed. Based on fibre optics technology, TUG testbed could effectively emulate the operation of real wireless optical communication systems together with various atmospheric perturbation effects such as fog and clouds. The suggested architecture applies an optical variable attenuator as a main device representing the tropospheric influences over the launched Gaussian beam in the free space channel. In addition, the current scheme involves an attenuator control unit with an external Digital Analog Converter (DAC) controlled by self-developed software. To obtain optimal results in terms of the presented setup, a calibration process including linearization of the non-linear attenuation versus voltage graph is performed. Finally, analytical results of the attenuation based on real measurements with the hardware channel emulator under laboratory conditions are shown. The implementation can be used in further activities to verify OWC-systems, before testing under real conditions.",
"title": ""
},
{
"docid": "048cc782baeec3a7f46ef5ee7abf0219",
"text": "Autoerotic asphyxiation is an unusual but increasingly more frequently occurring phenomenon, with >1000 fatalities in the United States per year. Understanding of this manner of death is likewise increasing, as noted by the growing number of cases reported in the literature. However, this form of accidental death is much less frequently seen in females (male:female ratio >50:1), and there is correspondingly less literature on female victims of autoerotic asphyxiation. The authors present the case of a 31-year-old woman who died of an autoerotic ligature strangulation and review the current literature on the subject. The forensic examiner must be able to discern this syndrome from similar forms of accidental and suicidal death, and from homicidal hanging/strangulation.",
"title": ""
},
{
"docid": "f262e911b5254ad4d4419ed7114b8a4f",
"text": "User Satisfaction is one of the most extensively used dimensions for Information Systems (IS) success evaluation with a large body of literature and standardized instruments of User Satisfaction. Despite the extensive literature on User Satisfaction, there exist much controversy over the measures of User Satisfaction and the adequacy of User Satisfaction measures to gauge the level of success in complex, contemporary IS. Recent studies in IS have suggested treating User Satisfaction as an overarching construct of success, rather than a measure of success. Further perplexity is introduced over the alleged overlaps between User Satisfaction measures and the measures of IS success (e.g. system quality, information quality) suggested in the literature. The following study attempts to clarify the aforementioned confusions by gathering data from 310 Enterprise System users and analyzing 16 User Satisfaction instruments. The statistical analysis of the 310 responses and the content analysis of the 16 instruments suggest the appropriateness of treating User Satisfaction as an overarching measure of success rather a dimension of success.",
"title": ""
},
{
"docid": "3b32ade20fbdd7474ee10fc10d80d90a",
"text": "We report the modulation performance of micro-light-emitting diode arrays with peak emission ranging from 370 to 520 nm, and emitter diameters ranging from 14 to 84 μm. Bandwidths in excess of 400 MHz and error-free data transmission up to 1.1Gbit/s is shown. These devices are shown integrated with electronic drivers, allowing convenient control of individual array emitters. Transmission using such a device is shown at 512 Mbit/s.",
"title": ""
},
{
"docid": "e1d9ff28da38fcf8ea3a428e7990af25",
"text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.",
"title": ""
},
{
"docid": "715fda02bad1633be9097cc0a0e68c8d",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "7dd3183ee59b800f3391f893d3578d64",
"text": "This paper reports on a bio-inspired angular accelerometer based on a two-mask microfluidic process using a PDMS mold. The sensor is inspired by the semicircular canals in mammalian vestibular systems and pairs a fluid-filled microtorus with a thermal detection principle based on thermal convection. With inherent linear acceleration insensitivity, the sensor features a sensitivity of 29.8μV/deg/s2=1.7mV/rad/s2, a dynamic range of 14,000deg/s2 and a detection limit of ~20deg/s2.",
"title": ""
},
{
"docid": "76a2bc6a8649ffe9111bfaa911572c9d",
"text": "URL shortening services have become extremely popular. However, it is still unclear whether they are an effective and reliable tool that can be leveraged to hide malicious URLs, and to what extent these abuses can impact the end users. With these questions in mind, we first analyzed existing countermeasures adopted by popular shortening services. Surprisingly, we found such countermeasures to be ineffective and trivial to bypass. This first measurement motivated us to proceed further with a large-scale collection of the HTTP interactions that originate when web users access live pages that contain short URLs. To this end, we monitored 622 distinct URL shortening services between March 2010 and April 2012, and collected 24,953,881 distinct short URLs. With this large dataset, we studied the abuse of short URLs. Despite short URLs are a significant, new security risk, in accordance with the reports resulting from the observation of the overall phishing and spamming activity, we found that only a relatively small fraction of users ever encountered malicious short URLs. Interestingly, during the second year of measurement, we noticed an increased percentage of short URLs being abused for drive-by download campaigns and a decreased percentage of short URLs being abused for spam campaigns. In addition to these security-related findings, our unique monitoring infrastructure and large dataset allowed us to complement previous research on short URLs and analyze these web services from the user's perspective.",
"title": ""
},
{
"docid": "20be8363ae04659061a56a1c7d3ee4d5",
"text": "The popularity of level sets for segmentation is mainly based on the sound and convenient treatment of regions and their boundaries. Unfortunately, this convenience is so far not known from level set methods when applied to images with more than two regions. This communication introduces a comparatively simple way how to extend active contours to multiple regions keeping the familiar quality of the two-phase case. We further suggest a strategy to determine the optimum number of regions as well as initializations for the contours",
"title": ""
},
{
"docid": "1b92f2391b35ca30b86f6d5e8fae7ffe",
"text": "In this paper, two novel compact diplexers for satellite applications are presented. The first covers the Ku-band with two closely spaced channels (Ku-transmission band: 10.7–13 GHz and Ku-reception band: 13.75–14.8 GHz). The second is wider than the first (overall bandwidth up to 50%) achieves the suppression of the higher order modes, and covers the Ku/K-band with a reception channel between 17.2 and 18.5 GHz. Both diplexers are composed of two novel bandpass filters, joined together with an E-plane T-junction. The bandpass filters are designed by combining a low-pass filtering function (based on $\\lambda $ /4-step-shaped band-stop elements separated by very short waveguide sections) and a high-pass filtering structure (based on the waveguide propagation cutoff effect). The novel diplexers show a very compact footprint and very relaxed fabrication tolerances, and are especially attractive for wideband applications. A prototype Ku/K-band diplexer has also been fabricated by milling. Measurements show a very good agreement with simulations, thereby demonstrating the validity and manufacturing robustness of the proposed topology.",
"title": ""
},
{
"docid": "e17f9e8d57c98928ecccb27e3259f2a3",
"text": "A broadcast encryption scheme allows the sender to securely distribute data to a dynamically changing set of users over an insecure channel. It has numerous applications including pay-TV systems, distribution of copyrighted material, streaming audio/video and many others. One of the most challenging settings for this problem is that of stateless receivers, where each user is given a fixed set of keys which cannot be updated through the lifetime of the system. This setting was considered by Naor, Naor and Lotspiech [NNL01], who also present a very efficient “subset difference” (SD) method for solving this problem. The efficiency of this method (which also enjoys efficient traitor tracing mechanism and several other useful features) was recently improved by Halevi and Shamir [HS02], who called their refinement the “Layered SD” (LSD) method. Both of the above methods were originally designed to work in the centralized (symmetric key) setting, where only the trusted designer of the system can encrypt messages to users. On the other hand, in many applications it is desirable not to store the secret keys “on-line”, or to allow untrusted users to broadcast information. This leads to the question of building a public key broadcast encryption scheme for stateless receivers; in particular, of extending the elegant SD/LSD methods to the public key setting. Unfortunately, Naor et al. [NNL01] notice that the natural technique for doing so will result in an enormous public key and very large storage for every user. In fact, [NNL01] pose this question of reducing the public key size and user’s storage as the first open problem of their paper. We resolve this question in the affirmative, by demonstrating that an O(1) size public key can be achieved for both of SD/LSD methods, in addition to the same (small) user’s storage and ciphertext size as in the symmetric key setting. Courant Institute of Mathematical Sciences, New York University.",
"title": ""
},
{
"docid": "212a7c22310977f6b8ada29437668ed5",
"text": "Gait analysis and machine learning classification on healthy subjects in normal walking Tomohiro Shirakawa, Naruhisa Sugiyama, Hiroshi Sato, Kazuki Sakurai & Eri Sato To cite this article: Tomohiro Shirakawa, Naruhisa Sugiyama, Hiroshi Sato, Kazuki Sakurai & Eri Sato (2015): Gait analysis and machine learning classification on healthy subjects in normal walking, International Journal of Parallel, Emergent and Distributed Systems, DOI: 10.1080/17445760.2015.1044007 To link to this article: http://dx.doi.org/10.1080/17445760.2015.1044007",
"title": ""
}
] | scidocsrr |
27714fc493c19c8446f6a3fde6f0c829 | Deep learning in partially-labeled data streams | [
{
"docid": "42043ee6577d791874c1aa34baf81e64",
"text": "Bagging, boosting and Random Forests are classical ensemble methods used to improve the performance of single classifiers. They obtain superior performance by increasing the accuracy and diversity of the single classifiers. Attempts have been made to reproduce these methods in the more challenging context of evolving data streams. In this paper, we propose a new variant of bagging, called leveraging bagging. This method combines the simplicity of bagging with adding more randomization to the input, and output of the classifiers. We test our method by performing an evaluation study on synthetic and real-world datasets comprising up to ten million examples.",
"title": ""
},
{
"docid": "1481e9a8f46290b2082aec098568c755",
"text": "Convolutional Neural Networks (CNN) have demonstrated its successful applications in computer vision, speech recognition, and natural language processing. For object recognition, CNNs might be limited by its strict label requirement and an implicit assumption that images are supposed to be target-object-dominated for optimal solutions. However, the labeling procedure, necessitating laying out the locations of target objects, is very tedious, making high-quality large-scale dataset prohibitively expensive. Data augmentation schemes are widely used when deep networks suffer the insufficient training data problem. All the images produced through data augmentation share the same label, which may be problematic since not all data augmentation methods are label-preserving. In this paper, we propose a weakly supervised CNN framework named Multiple Instance Learning Convolutional Neural Networks (MILCNN) to solve this problem. We apply MILCNN framework to object recognition and report state-of-the-art performance on three benchmark datasets: CIFAR10, CIFAR100 and ILSVRC2015 classification dataset.",
"title": ""
}
] | [
{
"docid": "e292d4af3c77a11e8e2013fca0c8fb04",
"text": "We present in this paper experiments on Table Recognition in hand-written register books. We first explain how the problem of row and column detection is modelled, and then compare two Machine Learning approaches (Conditional Random Field and Graph Convolutional Network) for detecting these table elements. Evaluation was conducted on death records provided by the Archives of the Diocese of Passau. With an F-1 score of 89, both methods provide a quality which allows for Information Extraction. Software and dataset are open source/data.",
"title": ""
},
{
"docid": "19917b734907c41e97c24120fb5be495",
"text": "Providing various wireless connectivities for vehicles enables the communication between vehicles and their internal and external environments. Such a connected vehicle solution is expected to be the next frontier for automotive revolution and the key to the evolution to next generation intelligent transportation systems (ITSs). Moreover, connected vehicles are also the building blocks of emerging Internet of Vehicles (IoV). Extensive research activities and numerous industrial initiatives have paved the way for the coming era of connected vehicles. In this paper, we focus on wireless technologies and potential challenges to provide vehicle-to-x connectivity. In particular, we discuss the challenges and review the state-of-the-art wireless solutions for vehicle-to-sensor, vehicle-to-vehicle, vehicle-to-Internet, and vehicle-to-road infrastructure connectivities. We also identify future research issues for building connected vehicles.",
"title": ""
},
{
"docid": "095ea6721c07be32db3c34da986ab6a9",
"text": "The skin is often viewed as a static barrier that protects the body from the outside world. Emphasis on studying the skin's architecture and biomechanics in the context of restoring skin movement and function is often ignored. It is fundamentally important that if skin is to be modelled or developed, we do not only focus on the biology of skin but also aim to understand its mechanical properties and structure in living dynamic tissue. In this review, we describe the architecture of skin and patterning seen in skin as viewed from a surgical perspective and highlight aspects of the microanatomy that have never fully been realized and provide evidence or concepts that support the importance of studying living skin's dynamic behaviour. We highlight how the structure of the skin has evolved to allow the body dynamic form and function, and how injury, disease or ageing results in a dramatic changes to the microarchitecture and changes physical characteristics of skin. Therefore, appreciating the dynamic microanatomy of skin from the deep fascia through to the skin surface is vitally important from a dermatological and surgical perspective. This focus provides an alternative perspective and approach to addressing skin pathologies and skin ageing.",
"title": ""
},
{
"docid": "e226452a288c3067ef8ee613f0b64090",
"text": "Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models, however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQVAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete bottleneck with EM helps us achieve better image generation results on CIFAR-10, and together with knowledge distillation, allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.",
"title": ""
},
{
"docid": "a1c9f24275ce626552602cf068776a3c",
"text": "The field of topology optimization seeks to optimize shapes under structural objectives, such as achieving the most rigid shape using a given quantity of material. Besides optimal shape design, these methods are increasingly popular as design tools, since they automatically produce structures having desirable physical properties, a task hard to perform by hand even for skilled designers. However, there is no simple way to control the appearance of the generated objects.\n In this paper, we propose to optimize shapes for both their structural properties and their appearance, the latter being controlled by a user-provided pattern example. These two objectives are challenging to combine, as optimal structural properties fully define the shape, leaving no degrees of freedom for appearance. We propose a new formulation where appearance is optimized as an objective while structural properties serve as constraints. This produces shapes with sufficient rigidity while allowing enough freedom for the appearance of the final structure to resemble the input exemplar.\n Our approach generates rigid shapes using a specified quantity of material while observing optional constraints such as voids, fills, attachment points, and external forces. The appearance is defined by examples, making our technique accessible to casual users. We demonstrate its use in the context of fabrication using a laser cutter to manufacture real objects from optimized shapes.",
"title": ""
},
{
"docid": "88785ff4fe8ff37edebbf8c74f8e2465",
"text": "We propose a data-driven method for automatic deception detection in real-life trial data using visual and verbal cues. Using OpenFace with facial action unit recognition, we analyze the movement of facial features of the witness when posed with questions and the acoustic patterns using OpenSmile. We then perform a lexical analysis on the spoken words, emphasizing the use of pauses and utterance breaks, feeding that to a Support Vector Machine to test deceit or truth prediction. We then try out a method to incorporate utterance-based fusion of visual and lexical analysis, using string based matching.",
"title": ""
},
{
"docid": "110742230132649f178d2fa99c8ffade",
"text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.",
"title": ""
},
{
"docid": "421320aa01ba00a91a843f2c6f710224",
"text": "Visual simulation of natural phenomena has become one of the most important research topics in computer graphics. Such phenomena include water, fire, smoke, clouds, and so on. Recent methods for the simulation of these phenomena utilize techniques developed in computational fluid dynamics. In this paper, the basic equations (Navier-Stokes equations) for simulating these phenomena are briefly described. These basic equations are used to simulate various natural phenomena. This paper then explains our applications of the equations for simulations of smoke, clouds, and aerodynamic sound.",
"title": ""
},
{
"docid": "a451eb3a82208b3826b1525a5f33181c",
"text": "Interleukin (IL)-33 is a member of the IL-1 family of cytokines. IL-33 is a nuclear protein that is also released into the extracellular space, and thus acts as a dual-function molecule, as does IL-1α. Extracellular IL-33 binds to the cell-surface receptor ST2, leading to the activation of intracellular signaling pathways similar to those used by IL-1. Unlike conventional cytokines, IL-33 might be secreted via unconventional pathways, and can be released upon cell injury as an alarmin. IL-33 is expressed in cells that are in contact with the environment, and acts as an early inducer of inflammation. Its production is then upregulated in inflamed tissues, thus contributing to the further amplification of inflammatory responses. Studies of IL-33-deficient mice will provide more information on intracellular functions of this cytokine. A large body of evidence supports the pathogenic role of IL-33 in asthma and possibly other inflammatory airway conditions. Furthermore, IL-33 has been shown to be involved in experimental models of arthritis and potentially has a pathogenic role in ulcerative colitis and fibrotic conditions, suggesting that IL-33 antagonists might be of interest for the treatment of asthma, rheumatoid arthritis and ulcerative colitis. However, IL-33 also appears to exert important functions in host defense against pathogens and to display cardioprotective properties, which might have implications for the clinical use of IL-33 blockade.",
"title": ""
},
{
"docid": "799573bf08fb91b1ac644c979741e7d2",
"text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.",
"title": ""
},
{
"docid": "38fccd4fd4a18c4c4bc9575092a24a3e",
"text": "We investigate the problem of human identity and gender recognition from gait sequences with arbitrary walking directions. Most current approaches make the unrealistic assumption that persons walk along a fixed direction or a pre-defined path. Given a gait sequence collected from arbitrary walking directions, we first obtain human silhouettes by background subtraction and cluster them into several clusters. For each cluster, we compute the cluster-based averaged gait image as features. Then, we propose a sparse reconstruction based metric learning method to learn a distance metric to minimize the intra-class sparse reconstruction errors and maximize the inter-class sparse reconstruction errors simultaneously, so that discriminative information can be exploited for recognition. The experimental results show the efficacy of our approach.",
"title": ""
},
{
"docid": "048646919aaf49a43f7eb32f47ba3041",
"text": "The authors developed and meta-analytically examined hypotheses designed to test and extend work design theory by integrating motivational, social, and work context characteristics. Results from a summary of 259 studies and 219,625 participants showed that 14 work characteristics explained, on average, 43% of the variance in the 19 worker attitudes and behaviors examined. For example, motivational characteristics explained 25% of the variance in subjective performance, 2% in turnover perceptions, 34% in job satisfaction, 24% in organizational commitment, and 26% in role perception outcomes. Beyond motivational characteristics, social characteristics explained incremental variances of 9% of the variance in subjective performance, 24% in turnover intentions, 17% in job satisfaction, 40% in organizational commitment, and 18% in role perception outcomes. Finally, beyond both motivational and social characteristics, work context characteristics explained incremental variances of 4% in job satisfaction and 16% in stress. The results of this study suggest numerous opportunities for the continued development of work design theory and practice.",
"title": ""
},
{
"docid": "2b266ebb64f14c3059938b34d72b8b19",
"text": "Preprocessing is an important task and critical step in information retrieval and text mining. The objective of this study is to analyze the effect of preprocessing methods in text classification on Turkish texts. We compiled two large datasets from Turkish newspapers using a crawler. On these compiled data sets and using two additional datasets, we perform a detailed analysis of preprocessing methods such as stemming, stopword filtering and word weighting for Turkish text classification on several different Turkish datasets. We report the results of extensive experiments.",
"title": ""
},
{
"docid": "c049f188b31bbc482e16d22a8061abfa",
"text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.",
"title": ""
},
{
"docid": "d97669811124f3c6f4cef5b2a144a46c",
"text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.",
"title": ""
},
{
"docid": "8cd77a6da9be2323ca9fc045079cbd50",
"text": "This paper provides an in-depth view of Terahertz Band (0.1–10 THz) communication, which is envisioned as a key technology to satisfy the increasing demand for higher speed wireless communication. THz Band communication will alleviate the spectrum scarcity and capacity limitations of current wireless systems, and enable new applications both in classical networking domains as well as in novel nanoscale communication paradigms. In this paper, the device design and development challenges for THz Band are surveyed first. The limitations and possible solutions for high-speed transceiver architectures are highlighted. The challenges for the development of new ultra-broadband antennas and very large antenna arrays are explained. When the devices are finally developed, then they need to communicate in the THz band. There exist many novel communication challenges such as propagation modeling, capacity analysis, modulation schemes, and other physical and link layer solutions, in the THz band which can be seen as a new frontier in the communication research. These challenges are treated in depth in this paper explaining the existing plethora of work and what still needs to be tackled. © 2014 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "ec7590c04dc31b1c6065ef4e15148dfc",
"text": "No thesis - no graduation. Academic writing poses manifold challenges to students, instructors and institutions alike. High labor costs, increasing student numbers, and the Bologna Process (which has reduced the period after which undergraduates in Europe submit their first thesis and thus the time available to focus on writing skills) all pose a threat to students’ academic writing abilities. This situation gave rise to the practical goal of this study: to determine if, and to what extent, academic writing and its instruction can be scaled (i.e., designed more efficiently) using a technological solution, in this case Thesis Writer (TW), a domain-specific, online learning environment for the scaffolding of student academic writing, combined with an online editor optimized for producing academic text. Compared to existing automated essay scoring and writing evaluation tools, TW is not focusing on feedback but on instruction, planning, and genre mastery. While most US-based tools, particularly those also used in secondary education, are targeting on the essay genre, TW is tailored to the needs of theses and research article writing (IMRD scheme). This mixed-methods paper reports data of a test run with a first-year course of 102 business administration students. A technology adoption model served as a frame of reference for the research design. From a student’s perspective, problems posed by the task of writing a research proposal as well as the use, usability, and usefulness of TW were studied through an online survey and focus groups (explanatory sequential design). Results seen were positive to highly positive – TW is being used, and has been deemed supportive by students. In particular, it supports the scaling of writing instruction in group assignment settings.",
"title": ""
},
{
"docid": "d7cf6950e58d7971eda60ea7a3b172d9",
"text": "Affect detection is a key component in developing intelligent educational interfaces that are capable of responding to the affective needs of students. In this paper, computer vision and machine learning techniques were used to detect students' affect as they used an educational game designed to teach fundamental principles of Newtonian physics. Data were collected in the real-world environment of a school computer lab, which provides unique challenges for detection of affect from facial expressions (primary channel) and gross body movements (secondary channel) - up to thirty students at a time participated in the class, moving around, gesturing, and talking to each other. Results were cross validated at the student level to ensure generalization to new students. Classification was successful at levels above chance for off-task behavior (area under receiver operating characteristic curve or (AUC = .816) and each affective state including boredom (AUC =.610), confusion (.649), delight (.867), engagement (.679), and frustration (.631) as well as a five-way overall classification of affect (.655), despite the noisy nature of the data. Implications and prospects for affect-sensitive interfaces for educational software in classroom environments are discussed.",
"title": ""
},
{
"docid": "7a99f08b8bcd17e26038ba9af997fe07",
"text": "Type 1 diabetes (T1D) results from the destruction of pancreatic insulin-producing beta cells and is strongly associated with the presence of islet autoantibodies. Autoantibodies to tyrosine phosphatase-like protein IA-2 (IA-2As) are considered to be highly predictive markers of T1D. We developed a novel lateral flow immunoassay (LFIA) based on a bridging format for the rapid detection of IA-2As in human serum samples. In this assay, one site of the IA-2As is bound to HA-tagged-IA-2, which is subsequently captured on the anti-HA-Tag antibody-coated test line on the strip. The other site of the IA-2As is bound to biotinylated IA-2, allowing the complex to be visualized using colloidal gold nanoparticle-conjugated streptavidin. For this study, 35 serum samples from T1D patients and 44 control sera from non-diabetic individuals were analyzed with our novel assay and the results were correlated with two IA-2A ELISAs. Among the 35 serum samples from T1D patients, the IA-2A LFIA, the in-house IA-2A ELISA and the commercial IA-2A ELISA identified as positive 21, 29 and 30 IA-2A-positive sera, respectively. The major advantages of the IA-2A LFIA are its rapidity and simplicity.",
"title": ""
}
] | scidocsrr |
57f990a09435872f0bd68386b04f0149 | Deep Speaker Feature Learning for Text-independent Speaker Verification | [
{
"docid": "7a3aaec6e397b416619bcde0c565b0f6",
"text": "This paper gives an overview of automatic speaker recognition technology, with an emphasis on text-independent recognition. Speaker recognition has been studied actively for several decades. We give an overview of both the classical and the state-of-the-art methods. We start with the fundamentals of automatic speaker recognition, concerning feature extraction and speaker modeling. We elaborate advanced computational techniques to address robustness and session variability. The recent progress from vectors towards supervectors opens up a new area of exploration and represents a technology trend. We also provide an overview of this recent development and discuss the evaluation methodology of speaker recognition systems. We conclude the paper with discussion on future directions. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "935fb5a196358764fda82ac50b87cf1b",
"text": "Linear dimensionality reduction methods, such as LDA, are often used in object recognition for feature extraction, but do not address the problem of how to use these features for recognition. In this paper, we propose Probabilistic LDA, a generative probability model with which we can both extract the features and combine them for recognition. The latent variables of PLDA represent both the class of the object and the view of the object within a class. By making examples of the same class share the class variable, we show how to train PLDA and use it for recognition on previously unseen classes. The usual LDA features are derived as a result of training PLDA, but in addition have a probability model attached to them, which automatically gives more weight to the more discriminative features. With PLDA, we can build a model of a previously unseen class from a single example, and can combine multiple examples for a better representation of the class. We show applications to classification, hypothesis testing, class inference, and clustering, on classes not observed during training.",
"title": ""
},
{
"docid": "83525470a770a036e9c7bb737dfe0535",
"text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.",
"title": ""
}
] | [
{
"docid": "4cb34eda6145a8ea0ccc22b3e547b5e5",
"text": "The factors that contribute to individual differences in the reward value of cute infant facial characteristics are poorly understood. Here we show that the effect of cuteness on a behavioural measure of the reward value of infant faces is greater among women reporting strong maternal tendencies. By contrast, maternal tendencies did not predict women's subjective ratings of the cuteness of these infant faces. These results show, for the first time, that the reward value of infant facial cuteness is greater among women who report being more interested in interacting with infants, implicating maternal tendencies in individual differences in the reward value of infant cuteness. Moreover, our results indicate that the relationship between maternal tendencies and the reward value of infant facial cuteness is not due to individual differences in women's ability to detect infant cuteness. This latter result suggests that individual differences in the reward value of infant cuteness are not simply a by-product of low-cost, functionless biases in the visual system.",
"title": ""
},
{
"docid": "8d570c7d70f9003b9d2f9bfa89234c35",
"text": "BACKGROUND\nThe targeting of the prostate-specific membrane antigen (PSMA) is of particular interest for radiotheragnostic purposes of prostate cancer. Radiolabeled PSMA-617, a 1,4,7,10-tetraazacyclododecane-N,N',N'',N'''-tetraacetic acid (DOTA)-functionalized PSMA ligand, revealed favorable kinetics with high tumor uptake, enabling its successful application for PET imaging (68Ga) and radionuclide therapy (177Lu) in the clinics. In this study, PSMA-617 was labeled with cyclotron-produced 44Sc (T 1/2 = 4.04 h) and investigated preclinically for its use as a diagnostic match to 177Lu-PSMA-617.\n\n\nRESULTS\n44Sc was produced at the research cyclotron at PSI by irradiation of enriched 44Ca targets, followed by chromatographic separation. 44Sc-PSMA-617 was prepared under standard labeling conditions at elevated temperature resulting in a radiochemical purity of >97% at a specific activity of up to 10 MBq/nmol. 44Sc-PSMA-617 was evaluated in vitro and compared to the 177Lu- and 68Ga-labeled match, as well as 68Ga-PSMA-11 using PSMA-positive PC-3 PIP and PSMA-negative PC-3 flu prostate cancer cells. In these experiments it revealed similar in vitro properties to that of 177Lu- and 68Ga-labeled PSMA-617. Moreover, 44Sc-PSMA-617 bound specifically to PSMA-expressing PC-3 PIP tumor cells, while unspecific binding to PC-3 flu cells was not observed. The radioligands were investigated with regard to their in vivo properties in PC-3 PIP/flu tumor-bearing mice. 44Sc-PSMA-617 showed high tumor uptake and a fast renal excretion. The overall tissue distribution of 44Sc-PSMA-617 resembled that of 177Lu-PSMA-617 most closely, while the 68Ga-labeled ligands, in particular 68Ga-PSMA-11, showed different distribution kinetics. 44Sc-PSMA-617 enabled distinct visualization of PC-3 PIP tumor xenografts shortly after injection, with increasing tumor-to-background contrast over time while unspecific uptake in the PC-3 flu tumors was not observed.\n\n\nCONCLUSIONS\nThe in vitro characteristics and in vivo kinetics of 44Sc-PSMA-617 were more similar to 177Lu-PSMA-617 than to 68Ga-PSMA-617 and 68Ga-PSMA-11. Due to the almost four-fold longer half-life of 44Sc as compared to 68Ga, a centralized production of 44Sc-PSMA-617 and transport to satellite PET centers would be feasible. These features make 44Sc-PSMA-617 particularly appealing for clinical application.",
"title": ""
},
{
"docid": "69d5af002ebad67099dc9d1793e89aec",
"text": "Deep models that are both effective and explainable are desirable in many settings; prior explainable models have been unimodal, offering either image-based visualization of attention weights or text-based generation of post-hoc justifications. We propose a multimodal approach to explanation, and argue that the two modalities provide complementary explanatory strengths. We collect two new datasets to define and evaluate this task, and propose a novel model which can provide joint textual rationale generation and attention visualization. Our datasets define visual and textual justifications of a classification decision for activity recognition tasks (ACT-X) and for visual question answering tasks (VQA-X). We quantitatively show that training with the textual explanations not only yields better textual justification models, but also better localizes the evidence that supports the decision. We also qualitatively show cases where visual explanation is more insightful than textual explanation, and vice versa, supporting our thesis that multimodal explanation models offer significant benefits over unimodal approaches.",
"title": ""
},
{
"docid": "f8ec274fc83aded74eed231d6723f4fe",
"text": "Sampling is a well-known technique to speed up architectural simulation of long-running workloads while maintaining accurate performance predictions. A number of sampling techniques have recently been developed that extend well-known single-threaded techniques to allow sampled simulation of multi-threaded applications. Unfortunately, prior work is limited to non-synchronizing applications (e.g., server throughput workloads); requires the functional simulation of the entire application using a detailed cache hierarchy which limits the overall simulation speedup potential; leads to different units of work across different processor architectures which complicates performance analysis; or, requires massive machine resources to achieve reasonable simulation speedups. In this work, we propose BarrierPoint, a sampling methodology to accelerate simulation by leveraging globally synchronizing barriers in multi-threaded applications. BarrierPoint collects microarchitecture-independent code and data signatures to determine the most representative inter-barrier regions, called barrierpoints. BarrierPoint estimates total application execution time (and other performance metrics of interest) through detailed simulation of these barrierpoints only, leading to substantial simulation speedups. Barrierpoints can be simulated in parallel, use fewer simulation resources, and define fixed units of work to be used in performance comparisons across processor architectures. Our evaluation of BarrierPoint using NPB and Parsec benchmarks reports average simulation speedups of 24.7× (and up to 866.6×) with an average simulation error of 0.9% and 2.9% at most. On average, BarrierPoint reduces the number of simulation machine resources needed by 78×.",
"title": ""
},
{
"docid": "26508379e41da5e3b38dd944fc9e4783",
"text": "We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We describe three Photobook tools in particular: one that allows search based on grey-level appearance, one that uses 2-D shape, and a third that allows search based on textural properties.",
"title": ""
},
{
"docid": "4d3de2d03431e8f06a5b8b31a784ecaa",
"text": "For medical students, virtual patient dialogue systems can provide useful training opportunities without the cost of employing actors to portray standardized patients. This work utilizes wordand character-based convolutional neural networks (CNNs) for question identification in a virtual patient dialogue system, outperforming a strong wordand characterbased logistic regression baseline. While the CNNs perform well given sufficient training data, the best system performance is ultimately achieved by combining CNNs with a hand-crafted pattern matching system that is robust to label sparsity, providing a 10% boost in system accuracy and an error reduction of 47% as compared to the pattern-matching system alone.",
"title": ""
},
{
"docid": "d22c8390e6ea9ea8c7a84e188cd10ba5",
"text": "BACKGROUND\nNutrition interventions targeted to individuals are unlikely to significantly shift US dietary patterns as a whole. Environmental and policy interventions are more promising for shifting these patterns. We review interventions that influenced the environment through food availability, access, pricing, or information at the point-of-purchase in worksites, universities, grocery stores, and restaurants.\n\n\nMETHODS\nThirty-eight nutrition environmental intervention studies in adult populations, published between 1970 and June 2003, were reviewed and evaluated on quality of intervention design, methods, and description (e.g., sample size, randomization). No policy interventions that met inclusion criteria were found.\n\n\nRESULTS\nMany interventions were not thoroughly evaluated or lacked important evaluation information. Direct comparison of studies across settings was not possible, but available data suggest that worksite and university interventions have the most potential for success. Interventions in grocery stores appear to be the least effective. The dual concerns of health and taste of foods promoted were rarely considered. Sustainability of environmental change was never addressed.\n\n\nCONCLUSIONS\nInterventions in \"limited access\" sites (i.e., where few other choices were available) had the greatest effect on food choices. Research is needed using consistent methods, better assessment tools, and longer durations; targeting diverse populations; and examining sustainability. Future interventions should influence access and availability, policies, and macroenvironments.",
"title": ""
},
{
"docid": "41240dccf91b1a3ea3ec9b12f5e451ce",
"text": "This study applied the concept of online consumer social experiences (OCSEs) to reduce online shopping post-payment dissonance (i.e., dissonance occurring between online payment and product receipt). Two types of OCSEs were developed: indirect social experiences (IDSEs) and virtual social experiences (VSEs). Two studies were conducted, in which 447 college students were enrolled. Study 1 compared the effects of OCSEs and non-OCSEs when online shopping post-payment dissonance occurred. The results indicate that providing consumers affected by online shopping post-payment dissonance with OCSEs reduces dissonance and produces higher satisfaction, higher repurchase intention, and lower complaint intention than when no OCSEs are provided. In addition, consumers’ interpersonal trust (IPT) and susceptibility to interpersonal informational influence (SIII) moderated the positive effects of OCSEs. Study 2 compared the effects of IDSEs and VSEs when online shopping post-payment dissonance occurred. The results sugomputing need for control omputer-mediated communication pprehension gest that the effects of IDSEs and VSEs on satisfaction, repurchase intention, and complaint intention are moderated by consumers’ computing need for control (CNC) and computer-mediated communication apprehension (CMCA). The consumers with high CNC and low CMCA preferred VSEs, whereas the consumers with low CNC and high CMCA preferred IDSEs. The effects of VSEs and IDSEs on consumers with high CNC and CMCA and those with low CNC and CMCA were not significantly different. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6380b60d47e49c9237208d48de9907e4",
"text": "To date, conversations about cloud computing have been dominated by vendors who focus more on technology and less on business value. While it is still not fully agreed as to what components constitute cloud computing technology, some examples of its potential uses are emerging. We identify seven cloud capabilities that executives can use to formulate cloud-based strategies. Firms can change the mix of these capabilities to develop cloud strategies for unique competitive benefits. We predict that cloud strategies will lead to more intense ecosystem-based competition; it is therefore imperative that companies prepare for such a future now.",
"title": ""
},
{
"docid": "5ec4451889beb4698c6ffb6fba4a53a3",
"text": "We survey recent work on the elliptic curve discrete logarithm problem. In particular we review index calculus algorithms using summation polynomials, and claims about their complexity.",
"title": ""
},
{
"docid": "79041480e35083e619bd804423459f2b",
"text": "Dynamic pricing is the dynamic adjustment of prices to consumers depending upon the value these customers attribute to a product or service. Today’s digital economy is ready for dynamic pricing; however recent research has shown that the prices will have to be adjusted in fairly sophisticated ways, based on sound mathematical models, to derive the benefits of dynamic pricing. This article attempts to survey different models that have been used in dynamic pricing. We first motivate dynamic pricing and present underlying concepts, with several examples, and explain conditions under which dynamic pricing is likely to succeed. We then bring out the role of models in computing dynamic prices. The models surveyed include inventory-based models, data-driven models, auctions, and machine learning. We present a detailed example of an e-business market to show the use of reinforcement learning in dynamic pricing.",
"title": ""
},
{
"docid": "bca33101885391147e411898026c0269",
"text": "The algorithmic stock trading has developed exponentially in the past years, while the automatism of the technical analysis was the main research are for implementing the algorithms. This paper proposes a model for a trading algorithm that combines the signals from different technical indicators in order to provide more accurate trading signals.",
"title": ""
},
{
"docid": "8bd3f52cfbeca614887fe1cbe92798ec",
"text": "This paper introduces a new supervised segmentation algorithm for remotely sensed hyperspectral image data which integrates the spectral and spatial information in a Bayesian framework. A multinomial logistic regression (MLR) algorithm is first used to learn the posterior probability distributions from the spectral information, using a subspace projection method to better characterize noise and highly mixed pixels. Then, contextual information is included using a multilevel logistic Markov-Gibbs Markov random field prior. Finally, a maximum a posteriori segmentation is efficiently computed by the min-cut-based integer optimization algorithm. The proposed segmentation approach is experimentally evaluated using both simulated and real hyperspectral data sets, exhibiting state-of-the-art performance when compared with recently introduced hyperspectral image classification methods. The integration of subspace projection methods with the MLR algorithm, combined with the use of spatial-contextual information, represents an innovative contribution in the literature. This approach is shown to provide accurate characterization of hyperspectral imagery in both the spectral and the spatial domain.",
"title": ""
},
{
"docid": "c2a297417553cb46fd98353d8b8351ac",
"text": "Recent advances in methods and techniques enable us to develop an interactive overlay to the global map of science based on aggregated citation relations among the 9,162 journals contained in the Science Citation Index and Social Science Citation Index 2009 combined. The resulting mapping is provided by VOSViewer. We first discuss the pros and cons of the various options: cited versus citing, multidimensional scaling versus spring-embedded algorithms, VOSViewer versus Gephi, and the various clustering algorithms and similarity criteria. Our approach focuses on the positions of journals in the multidimensional space spanned by the aggregated journal-journal citations. A number of choices can be left to the user, but we provide default options reflecting our preferences. Some examples are also provided; for example, the potential of using this technique to assess the interdisciplinarity of organizations and/or document sets.",
"title": ""
},
{
"docid": "f10eb96de9181085e249fdca1f4a568d",
"text": "This paper argues that tracking, object detection, and model building are all similar activities. We describe a fully automatic system that builds 2D articulated models known as pictorial structures from videos of animals. The learned model can be used to detect the animal in the original video - in this sense, the system can be viewed as a generalized tracker (one that is capable of modeling objects while tracking them). The learned model can be matched to a visual library; here, the system can be viewed as a video recognition algorithm. The learned model can also be used to detect the animal in novel images - in this case, the system can be seen as a method for learning models for object recognition. We find that we can significantly improve the pictorial structures by augmenting them with a discriminative texture model learned from a texture library. We develop a novel texture descriptor that outperforms the state-of-the-art for animal textures. We demonstrate the entire system on real video sequences of three different animals. We show that we can automatically track and identify the given animal. We use the learned models to recognize animals from two data sets; images taken by professional photographers from the Corel collection, and assorted images from the Web returned by Google. We demonstrate quite good performance on both data sets. Comparing our results with simple baselines, we show that, for the Google set, we can detect, localize, and recover part articulations from a collection demonstrably hard for object recognition",
"title": ""
},
{
"docid": "ee75e43f0bd61dd215299a188cecb2ed",
"text": "The ultra low-latency operations of communications and computing enable many potential IoT applications, and thus have gained widespread attention recently. Existing mobile devices and telecommunication systems may not be able to provide the highly desired low-latency computing and communications services. To meet the needs of those applications, we introduce the Fog-Radio Access Network (F-RAN) architecture, which brings the efficient computing capability of the cloud to the edge of the network. By distributing computing-intensive tasks to multiple F-RAN nodes, F-RAN has the potential to meet the requirements of those ultra low-latency applications. In this article, we first introduce the F-RAN and its rationale in serving ultra low-latency applications. Then we discuss the need for a service framework for F-RAN to cope with the complex tradeoff among performance, computing cost, and communication cost. Finally, we illustrate the mobile AR service as an exemplary scenario to provide insights for the design of the framework. Examples and numerical results show that ultra low-latency services can be achieved by the F-RAN by properly handling the tradeoff.",
"title": ""
},
{
"docid": "120007860a5fbf6a3bbc9b2fe6074b87",
"text": "For the last few decades, optimization has been developing at a fast rate. Bio-inspired optimization algorithms are metaheuristics inspired by nature. These algorithms have been applied to solve different problems in engineering, economics, and other domains. Bio-inspired algorithms have also been applied in different branches of information technology such as networking and software engineering. Time series data mining is a field of information technology that has its share of these applications too. In previous works we showed how bio-inspired algorithms such as the genetic algorithms and differential evolution can be used to find the locations of the breakpoints used in the symbolic aggregate approximation of time series representation, and in another work we showed how we can utilize the particle swarm optimization, one of the famous bio-inspired algorithms, to set weights to the different segments in the symbolic aggregate approximation representation. In this paper we present, in two different approaches, a new meta optimization process that produces optimal locations of the breakpoints in addition to optimal weights of the segments. The experiments of time series classification task that we conducted show an interesting example of how the overfitting phenomenon, a frequently encountered problem in data mining which happens when the model overfits the training set, can interfere in the optimization process and hide the superior performance of an optimization algorithm.",
"title": ""
},
{
"docid": "4e368af438658472eb2d7e3db118f61b",
"text": "Radiological diagnosis of acetabular retroversion is based on the presence of the cross-over sign (COS), the posterior wall sign (PWS), and prominence of the ischial spine (PRISS). The primary purpose of the study was to correlate the quantitative cross-over sign with the presence or absence of the PRISS and PWS signs. The hypothesis was that both, PRISS and PWS are associated with a higher cross-over sign ratio or higher amount of acetabular retroversion. A previous study identified 1417 patients with a positive acetabular cross-over sign. Among these, three radiological parameters were assessed: (1) the amount of acetabular retroversion, quantified as a cross-over sign ratio; (2) the presence of the PRISS sign; (3) the presence of the PWS sign. The relation of these three parameters was analysed using Fisher's exact test, ANOVA, and linear regression analysis. In hips with cross-over sign, the PRISS was present in 61.7%. A direct association between PRISS and the cross-over sign ratio (p < 0.001) was seen. The PWS was positive in 31% of the hips and was also significantly related with the cross-over sign ratio (p < 0.001). In hips with a PRISS, 39.7% had a PWS sign, which was a significant relation (p < 0.001). In patients with positive PWS, 78.8% of the cases also had a PRISS (p < 0.001). Both the PRISS and PWS signs were significantly associated with higher grade cross-over values. Both the PRISS and PWS signs as well as the coexistence of COS, PRISS, and PWS are significantly associated with higher grade of acetabular retroversion. In conjunction with the COS, the PRISS and PWS signs indicate severe acetabular retroversion. Presence and recognition of distinct radiological signs around the hip joint might raise the awareness of possible femoroacetabular impingement (FAI).",
"title": ""
},
{
"docid": "cebfc5224413c5acb7831cbf29ae5a8e",
"text": "Radio Frequency (RF) Energy Harvesting holds a pro mising future for generating a small amount of electrical power to drive partial circuits in wirelessly communicating electronics devices. Reducing power consumption has become a major challenge in wireless sensor networks. As a vital factor affecting system cost and lifetime, energy consumption in wireless sensor networks is an emerging and active res arch area. This chapter presents a practical approach for RF Energy harvesting and man agement of the harvested and available energy for wireless sensor networks using the Impro ved Energy Efficient Ant Based Routing Algorithm (IEEABR) as our proposed algorithm. The c hapter looks at measurement of the RF power density, calculation of the received power, s torage of the harvested power, and management of the power in wireless sensor networks . The routing uses IEEABR technique for energy management. Practical and real-time implemen tatio s of the RF Energy using PowercastTM harvesters and simulations using the ene rgy model of our Libelium Waspmote to verify the approach were performed. The chapter con cludes with performance analysis of the harvested energy, comparison of IEEABR and other tr aditional energy management techniques, while also looking at open research areas of energy harvesting and management for wireless sensor networks.",
"title": ""
},
{
"docid": "0b9ae0bf6f6201249756d87a56f0005e",
"text": "To reduce energy consumption and wastage, effective energy management at home is key and an integral part of the future Smart Grid. In this paper, we present the design and implementation of Green Home Service (GHS) for home energy management. Our approach addresses the key issues of home energy management in Smart Grid: a holistic management solution, improved device manageability, and an enabler of Demand-Response. We also present the scheduling algorithms in GHS for smart energy management and show the results in simulation studies.",
"title": ""
}
] | scidocsrr |
c6098e292e527976dcc27b9459eab4f0 | A Dynamic Window Neural Network for CCG Supertagging | [
{
"docid": "49a6d062d3e24a8b325e9e7b142d32be",
"text": "Rumelhart, Hinton and Williams [Rumelhart et al. 86] describe a learning procedure for layered networks of deterministic, neuron-like units. This paper describes further research on the learning procedure. We start by describing the units, the way they are connected, the learning procedure, and the extension to iterative nets. We then give an example in which a network learns a set of filters that enable it to discriminate formant-like patterns in the presence of noise. The speed of learning is strongly dependent on the shape of the surface formed by the error measure in \"weight space.\" We give examples of the shape of the error surface for a typical task and illustrate how an acceleration method speeds up descent in weight space. The main drawback of the learning procedure is the way it scales as the size of the task and the network increases. We give some preliminary results on scaling and show how the magnitude of the optimal weight changes depends on the fan-in of the units. Additional results illustrate the effects on learning speed of the amount of interaction between the weights. A variation of the learning procedure that back-propagates desired state information rather than error gradients is developed and compared with the standard procedure. Finally, we discuss the relationship between our iterative networks and the \"analog\" networks described by Hopfield and Tank [Hopfield and Tank 85]. The learning procedure can discover appropriate weights in their kind of network, as well as determine an optimal schedule for varying the nonlinearity of the units during a search.",
"title": ""
},
{
"docid": "59c24fb5b9ac9a74b3f89f74b332a27c",
"text": "This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations.",
"title": ""
},
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
}
] | [
{
"docid": "4e42d29a924c6e1e11456255c1f6cba0",
"text": "We present a reformulation of the stochastic optimal control problem in terms of KL divergence minimisation, not only providing a unifying perspective of previous approaches in this area, but also demonstrating that the formalism leads to novel practical approaches to the control problem. Specifically, a natural relaxation of the dual formulation gives rise to exact iterative solutions to the finite and infinite horizon stochastic optimal control problem, while direct application of Bayesian inference methods yields instances of risk sensitive control. We furthermore study corresponding formulations in the reinforcement learning setting and present model free algorithms for problems with both discrete and continuous state and action spaces. Evaluation of the proposed methods on the standard Gridworld and Cart-Pole benchmarks verifies the theoretical insights and shows that the proposed methods improve upon current approaches.",
"title": ""
},
{
"docid": "b17a8e121f865b7143bc2e38fa367b07",
"text": "Radio frequency (r.f.) has been investigated as a means of externally powering miniature and long term implant telemetry systems. Optimum power transfer from the transmitter to the receiving coil is desired for total system efficiency. A seven step design procedure for the transmitting and receiving coils is described based on r.f., coil diameter, coil spacing, load and the number of turns of the coil. An inductance tapping circuit and a voltage doubler circuit have been built in accordance with the design procedure. Experimental results were within the desired total system efficiency ranges of 18% and 23%, respectively. On a étudié la fréquence radio (f.r.) en tant que source extérieure permettant de faire fonctionner les systèmes télémétriques d'implants miniatures à long terme. Afin d'assurer une efficacité totale au système, il est nécessaire d'obtenir un transfert de puissance optimum de l'émetteur à la bobine réceptrice. On donne la description d'une technique de conception en sept temps, fondée sur la fréquence radio, le diamètre de la bobine, l'espacement des spires, la charge et le nombre de tours de la bobine. Un circuit de captage de tension par induction et un circuit doubleur de tension ont été construits conformément à la méthode de conception. Les résultats expérimentaux étaient compris dans les limites d'efficacité totale souhaitable pour le système, soit 18% à 23%, respectivement. Hochfrequenz wurde als Mittel zur externen Energieversorgung von Miniatur und langfristigen Implantat-Telemetriesystemen untersucht. Zur Verwirklichung der höchsten Leistungsfähigkeit braucht das System optimale Energieübertragung von Sendegerät zu Empfangsspule. Ein auf Hochfrequenz beruhendes siebenstufiges Konstruktionssystem für Sende- und Empfangsspulen wird beschrieben, mit Hinweisen über Spulendurchmesser, Spulenanordnung, Ladung und die Anzahl der Wicklungen. Ein Induktionsanzapfstromkreis und ein Spannungsverdoppler wurden dem Konstruktionsverfahren entsprechend gebaut. Versuchsergebnisse lagen im Bereich des gewünschten Systemleistungsgrades von 18% und 23%.",
"title": ""
},
{
"docid": "086f9cbed93553ca00b2afeff1cb8508",
"text": "Rapid advance of location acquisition technologies boosts the generation of trajectory data, which track the traces of moving objects. A trajectory is typically represented by a sequence of timestamped geographical locations. A wide spectrum of applications can benefit from the trajectory data mining. Bringing unprecedented opportunities, large-scale trajectory data also pose great challenges. In this paper, we survey various applications of trajectory data mining, e.g., path discovery, location prediction, movement behavior analysis, and so on. Furthermore, this paper reviews an extensive collection of existing trajectory data mining techniques and discusses them in a framework of trajectory data mining. This framework and the survey can be used as a guideline for designing future trajectory data mining solutions.",
"title": ""
},
{
"docid": "f8c238001bb72ed4f3e1bc2241f22d26",
"text": "The resource management system is the central component of network computing systems. There have been many projects focused on network computing that have designed and implemented resource management systems with a variety of architectures and functionalities. In this paper, we develop a comprehensive taxonomy for describing resource management architectures and use this taxonomy to survey existing resource management implementations in very large-scale network computing systems known as Grids. We use the taxonomy and the survey results to identify architectural approaches that have not been fully explored in the research.",
"title": ""
},
{
"docid": "0281a146c98cce5dd6a8990c4adf5bba",
"text": "We propose a highly efficient and faster Single Image Super-Resolution (SISR) model with Deep Convolutional neural networks (Deep CNN). Deep CNN have recently shown that they have a significant reconstruction performance on single-image super-resolution. The current trend is using deeper CNN layers to improve performance. However, deep models demand larger computation resources and are not suitable for network edge devices like mobile, tablet and IoT devices. Our model achieves state-of-the-art reconstruction performance with at least 10 times lower calculation cost by Deep CNN with Residual Net, Skip Connection and Network in Network (DCSCN). A combination of Deep CNNs and Skip connection layers are used as a feature extractor for image features on both local and global areas. Parallelized 1x1 CNNs, like the one called Network in Network, are also used for image reconstruction. That structure reduces the dimensions of the previous layer’s output for faster computation with less information loss, and make it possible to process original images directly. Also we optimize the number of layers and filters of each CNN to significantly reduce the calculation cost. Thus, the proposed algorithm not only achieves stateof-the-art performance but also achieves faster and more efficient computation. Code is available at https://github.com/jiny2001/dcscn-super-resolution.",
"title": ""
},
{
"docid": "a13d1144c4a719b1d6d5f4f0e645c2e3",
"text": "Array antennas for 77GHz automotive radar application are designed and measured. Linear series-fed patch array (SFPA) antenna is designed for transmitters of middle range radar (MRR) and all the receivers. A planar SFPA based on the linear one and substrate integrated waveguide (SIW) feeding network is proposed for transmitter of long range radar (LRR), which can decline the radiation from feeding network itself. The array antennas are fabricated, both the performances with and without radome of these array antennas are measured. Good agreement between simulation and measurement has been achieved. They can be good candidates for 77GHz automotive application.",
"title": ""
},
{
"docid": "fce925493fc9f7cbbe4c202e5e625605",
"text": "Topic models are a useful and ubiquitous tool for understanding large corpora. However, topic models are not perfect, and for many users in computational social science, digital humanities, and information studies—who are not machine learning experts—existing models and frameworks are often a “take it or leave it” proposition. This paper presents a mechanism for giving users a voice by encoding users’ feedback to topic models as correlations between words into a topic model. This framework, interactive topic modeling (itm), allows untrained users to encode their feedback easily and iteratively into the topic models. Because latency in interactive systems is crucial, we develop more efficient inference algorithms for tree-based topic models. We validate the framework both with simulated and real users.",
"title": ""
},
{
"docid": "01beae2504022968153e73be91d1765d",
"text": "User studies in the music information retrieval and music digital library fields have been gradually increasing in recent years, but large-scale studies that can help detect common user behaviors are still lacking. We have conducted a large-scale user survey in which we asked numerous questions related to users’ music needs, uses, seeking, and management behaviors. In this paper, we present our preliminary findings, specifically focusing on the responses to questions of users’ favorite music related websites/applications and the reasons why they like them. We provide a list of popular music services, as well as an analysis of how these services are used, and what qualities are valued. Our findings suggest several trends in the types of music services people like: an increase in the popularity of music streaming and mobile music consumption, the emergence of new functionality, such as music identification and cloud music services, an appreciation of music videos, serendipitous discovery of music, and customizability, as well as users’ changing expectations of particular types of music information.",
"title": ""
},
{
"docid": "44ea81d223e3c60c7b4fd1192ca3c4ba",
"text": "Existing classification and rule learning algorithms in machine learning mainly use heuristic/greedy search to find a subset of regularities (e.g., a decision tree or a set of rules) in data for classification. In the past few years, extensive research was done in the database community on learning rules using exhaustive search under the name of association rule mining. The objective there is to find all rules in data that satisfy the user-specified minimum support and minimum confidence. Although the whole set of rules may not be used directly for accurate classification, effective and efficient classifiers have been built using the rules. This paper aims to improve such an exhaustive search based classification system CBA (Classification Based on Associations). The main strength of this system is that it is able to use the most accurate rules for classification. However, it also has weaknesses. This paper proposes two new techniques to deal with these weaknesses. This results in remarkably accurate classifiers. Experiments on a set of 34 benchmark datasets show that on average the new techniques reduce the error of CBA by 17% and is superior to CBA on 26 of the 34 datasets. They reduce the error of the decision tree classifier C4.5 by 19%, and improve performance on 29 datasets. Similar good results are also achieved against the existing classification systems, RIPPER, LB and a Naïve-Bayes",
"title": ""
},
{
"docid": "2b1a9bc5ae7e9e6c2d2d008e2a2384b5",
"text": "Network information distribution is a fundamental service for any anonymization network. Even though anonymization and information distribution about the network are two orthogonal issues, the design of the distribution service has a direct impact on the anonymization. Requiring each node to know about all other nodes in the network (as in Tor and AN.ON -- the most popular anonymization networks) limits scalability and offers a playground for intersection attacks. The distributed designs existing so far fail to meet security requirements and have therefore not been accepted in real networks.\n In this paper, we combine probabilistic analysis and simulation to explore DHT-based approaches for distributing network information in anonymization networks. Based on our findings we introduce NISAN, a novel approach that tries to scalably overcome known security problems. It allows for selecting nodes uniformly at random from the full set of all available peers, while each of the nodes has only limited knowledge about the network. We show that our scheme has properties similar to a centralized directory in terms of preventing malicious nodes from biasing the path selection. This is done, however, without requiring to trust any third party. At the same time our approach provides high scalability and adequate performance. Additionally, we analyze different design choices and come up with diverse proposals depending on the attacker model. The proposed combination of security, scalability, and simplicity, to the best of our knowledge, is not available in any other existing network information distribution system.",
"title": ""
},
{
"docid": "4936f1c5dfa5da581c4bcaf147050041",
"text": "With the popularity of social networks, such as mi-croblogs and Twitter, topic inference for short text is increasingly significant and essential for many content analysis tasks. Biterm topic model (BTM) is superior to conventional topic models in uncovering latent semantic relevance for short text. However, Gibbs sampling employed by BTM is very time consuming when inferring topics, especially for large-scale datasets. It requires O{K) operations per sample for K topics, where K denotes the number of topics in the corpus. In this paper, we propose an acceleration algorithm of BTM, FastBTM, using an efficient sampling method for BTM which only requires O(1) amortized time while the traditional ones scale linearly with the number of topics. FastBTM is based on Metropolis-Hastings and alias method, both of which have been widely adopted in latent Dirichlet allocation (LDA) model and achieved outstanding speedup. We carry out a number of experiments on Tweets2011 Collection dataset and Enron dataset, indicating that our method is robust enough for both short texts and normal documents. Our work can be approximately 9 times faster than traditional Gibbs sampling method per iteration, when setting K = 1000. The source code of FastBTM can be obtained from https://github.com/paperstudy/FastBTM.",
"title": ""
},
{
"docid": "b7bf7d430e4132a4d320df3a155ee74c",
"text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.",
"title": ""
},
{
"docid": "b8f23ec8e704ee1cf9dbe6063a384b09",
"text": "The Dirichlet distribution and its compound variant, the Dirichlet-multinomial, are two of the most basic models for proportional data, such as the mix of vocabulary words in a text document. Yet the maximum-likelihood estimate of these distributions is not available in closed-form. This paper describes simple and efficient iterative schemes for obtaining parameter estimates in these models. In each case, a fixed-point iteration and a Newton-Raphson (or generalized Newton-Raphson) iteration is provided. 1 The Dirichlet distribution The Dirichlet distribution is a model of how proportions vary. Let p denote a random vector whose elements sum to 1, so that pk represents the proportion of item k. Under the Dirichlet model with parameter vector α, the probability density at p is p(p) ∼ D(α1, ..., αK) = Γ( ∑ k αk) ∏ k Γ(αk) ∏ k pk k (1) where pk > 0 (2)",
"title": ""
},
{
"docid": "0c9228dd4a65587e43fc6d2d1f0b03ce",
"text": "Secure multi-party computation (MPC) is a technique well suited for privacy-preserving data mining. Even with the recent progress in two-party computation techniques such as fully homomorphic encryption, general MPC remains relevant as it has shown promising performance metrics in real-world benchmarks. Sharemind is a secure multi-party computation framework designed with real-life efficiency in mind. It has been applied in several practical scenarios, and from these experiments, new requirements have been identified. Firstly, large datasets require more efficient protocols for standard operations such as multiplication and comparison. Secondly, the confidential processing of financial data requires the use of more complex primitives, including a secure division operation. This paper describes new protocols in the Sharemind model for secure multiplication, share conversion, equality, bit shift, bit extraction, and division. All the protocols are implemented and benchmarked, showing that the current approach provides remarkable speed improvements over the previous work. This is verified using real-world benchmarks for both operations and algorithms.",
"title": ""
},
{
"docid": "e4e97569f53ddde763f4f28559c96ba6",
"text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.",
"title": ""
},
{
"docid": "f794d4a807a4d69727989254c557d2d1",
"text": "The purpose of this study was to describe the operative procedures and clinical outcomes of a new three-column internal fixation system with anatomical locking plates on the tibial plateau to treat complex three-column fractures of the tibial plateau. From June 2011 to May 2015, 14 patients with complex three-column fractures of the tibial plateau were treated with reduction and internal fixation through an anterolateral approach combined with a posteromedial approach. The patients were randomly divided into two groups: a control group which included seven cases using common locking plates, and an experimental group which included seven cases with a new three-column internal fixation system with anatomical locking plates. The mean operation time of the control group was 280.7 ± 53.7 minutes, which was 215.0 ± 49.1 minutes in the experimental group. The mean intra-operative blood loss of the control group was 692.8 ± 183.5 ml, which was 471.4 ± 138.0 ml in the experimental group. The difference was statistically significant between the two groups above. The differences were not statistically significant between the following mean numbers of the two groups: Rasmussen score immediately after operation; active extension–flexion degrees of knee joint at three and 12 months post-operatively; tibial plateau varus angle (TPA) and posterior slope angle (PA) immediately after operation, at three and at 12 months post-operatively; HSS (The Hospital for Special Surgery) knee-rating score at 12 months post-operatively. All fractures healed. A three-column internal fixation system with anatomical locking plates on tibial plateau is an effective and safe tool to treat complex three-column fractures of the tibial plateau and it is more convenient than the common plate.",
"title": ""
},
{
"docid": "6d5bd23414cc9f61534cddb0987eee5b",
"text": "Open source software has witnessed an exponential growth in the last two decades and it is playing an increasingly important role in many companies and organizations leading to the formation of open source software ecosystems. In this paper we present a quality model that will allow the evaluation of those ecosystems in terms of their relevant quality characteristics such as health or activeness. To design this quality model we started by analysing the quality measures found during the execution of a systematic literature review on open source software ecosystems and, then, we classified and reorganized the set of measures in order to build a solid quality model. Finally, we test the suitability of the constructed quality model using the GNOME ecosystem.",
"title": ""
},
{
"docid": "72b6d3039d8bfd1375bfa426db66ecfd",
"text": "Die kutane Myiasis ist eine temporäre Besiedlung der Haut beim Menschen oder beim Vertebraten durch Fliegenlarven von vor allem zwei Spezies. In Zentral- und Südamerika wird die kutane Myiasis meist durch die Larven der Dermatobia hominis verursacht, in Afrika meist von Larven der Cordylobia sp. Wir beschreiben einen Fall von kutaner Myiasis bei einer Familie, die von einer 3-wöchigen Reise aus Ghana zurückgekehrt war. Die Parasiten (ca. 1–2 cm im Durchmesser und 0,5–1 cm hohe tumorähnliche Schwellungen) wurden vom Rücken des 48-jährigen Mannes, von der Nase, der Schulter und einem Handgelenk seiner 47-jährigen Frau sowie vom Rücken der 14-jährigen Tochter entfernt. Die Parasiten wurden als Larven der Cordylobia antropophaga Fliege indentifiziert. Nach Entfernung der ungefähr 8 mm großen Larven heilten die Läsionen innerhalb von 2 Wochen ohne weitere Therapie. Fälle von kutaner Myiasis beim Menschen sind höchstwahrscheinlich häufiger als angenommen, weil viele nicht diagnostiziert bzw. nicht veröffentlicht werden. Da Reisen in tropische und suptropische Gebiete aber immer häufiger werden, sollten Kliniker und Labors bei Furunkel-ähnlichen Läsionen auch an die Möglichkeit einer solchen Cordylobia-Myiasis denken. Dies gilt vor allem für Reisende, die aus dem tropischen Afrika zurückkehren. Cutaneous myiasis is a temporary parasitic infestation of the skin of human and other vertebrates by fly larvae, primarily species of the flies Dermatobia and Cordylobia. In Central and South America cutaneous myiasis is mainly caused by the larvae of Dermatobia hominis; in Africa it is mostly due to the larvae of Cordylobia spp. We describe a case of cutaneous myiasis in a family who returned to Slovenia from a three-week trip to Ghana. The parasites, in tumor-like swellings about 1–2 cm in diameter and 0.5–1 cm high, were removed from the back of the 48-year-old man, the nose, shoulder and wrist of his 47-year-old wife, and the back of their 14-year-old daughter. The parasites were identified as larvae of the fly C. anthropophaga. After removal of the larvae, which were oval-shaped and about 8 mm long, the lesions healed in two weeks without further treatment. Human cases of cutaneous myiasis are most probably underreported because many remain undiagnosed or unpublished. Because of increasing travel to tropical and subtropical areas, clinical and laboratory staff will need to be more alert to the possibility of Cordylobia myiasis in patients with furuncle-like lesions, particularly in individuals who have recently returned from tropical Africa.",
"title": ""
},
{
"docid": "b1ae52dfa5ed1bb9c835816ca3fd52b4",
"text": "The use of the halide-sensitive fluorescent probes (6-methoxy-N-(-sulphopropyl)quinolinium (SPQ) and N-(ethoxycarbonylmethyl)-6-methoxyquinolinium bromide (MQAE)) to measure chloride transport in cells has now been established as an alternative to the halide-selective electrode technique, radioisotope efflux assays and patch-clamp electrophysiology. We report here procedures for the assessment of halide efflux, using SPQ/MQAE halide-sensitive fluorescent indicators, from both adherent cultured epithelial cells and freshly obtained primary human airway epithelial cells. The procedure describes the calculation of efflux rate constants using experimentally derived SPQ/MQAE fluorescence intensities and empirically derived Stern-Volmer calibration constants. These fluorescence methods permit the quantitative analysis of CFTR function.",
"title": ""
},
{
"docid": "4a0c2ad7f07620fa5ea5a97a68672131",
"text": "The Philadelphia Neurodevelopmental Cohort (PNC) is a large-scale, NIMH funded initiative to understand how brain maturation mediates cognitive development and vulnerability to psychiatric illness, and understand how genetics impacts this process. As part of this study, 1445 adolescents ages 8-21 at enrollment underwent multimodal neuroimaging. Here, we highlight the conceptual basis for the effort, the study design, and the measures available in the dataset. We focus on neuroimaging measures obtained, including T1-weighted structural neuroimaging, diffusion tensor imaging, perfusion neuroimaging using arterial spin labeling, functional imaging tasks of working memory and emotion identification, and resting state imaging of functional connectivity. Furthermore, we provide characteristics regarding the final sample acquired. Finally, we describe mechanisms in place for data sharing that will allow the PNC to become a freely available public resource to advance our understanding of normal and pathological brain development.",
"title": ""
}
] | scidocsrr |
ab1949ce70ded63ab2c218c5b557c221 | Deep Recurrent Neural Networks for Human Activity Recognition | [
{
"docid": "6dfc558d273ec99ffa7dc638912d272c",
"text": "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.",
"title": ""
},
{
"docid": "ddc18f2d129d95737b8f0591560d202d",
"text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.",
"title": ""
},
{
"docid": "efa20ddb621568b4e3a590a72d1e762c",
"text": "The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis where deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology, which combines features learned from inertial sensor data together with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral domain preprocessing is used before the data are passed onto the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times for the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.",
"title": ""
},
{
"docid": "d46594f40795de0feef71b480a53553f",
"text": "Feed-forward, Deep neural networks (DNN)-based text-tospeech (TTS) systems have been recently shown to outperform decision-tree clustered context-dependent HMM TTS systems [1, 4]. However, the long time span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic, feed-forward nature in DNN-based modeling. Also, to synthesize a smooth speech trajectory, the dynamic features are commonly used to constrain speech parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure which is cascaded with upper hidden layers with a bidirectional RNN structure of LSTM, can outperform either the conventional, decision tree-based HMM, or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth and no dynamic constraints are needed.",
"title": ""
}
] | [
{
"docid": "e9047e59f58e71404107b065e584c547",
"text": "Dermoscopic skin images are often obtained with different imaging devices, under varying acquisition conditions. In this work, instead of attempting to perform intensity and color normalization, we propose to leverage computational color constancy techniques to build an artificial data augmentation technique suitable for this kind of images. Specifically, we apply the shades of gray color constancy technique to color-normalize the entire training set of images, while retaining the estimated illuminants. We then draw one sample from the distribution of training set illuminants and apply it on the normalized image. We employ this technique for training two deep convolutional neural networks for the tasks of skin lesion segmentation and skin lesion classification, in the context of the ISIC 2017 challenge and without using any external dermatologic image set. Our results on the validation set are promising, and will be supplemented with extended results on the hidden test set when available.",
"title": ""
},
{
"docid": "0c0d46af8cbb0486d12c7d60f72ea715",
"text": "Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper will start by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus’s criticism of AI, Searle’s Chinese Room paper, Kurzweil’s predictions in the ‘Age of Spiritual Machines’, and Omohundro’s ‘AI Drives’ paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement, and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.//",
"title": ""
},
{
"docid": "781890e1325126fe262a0587b26f9b6b",
"text": "We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs). Using a sequence-tosequence model, and some trivial preprocessing and postprocessing of AMRs, we obtain a baseline accuracy of 53.1 (F-score on AMR-triples). We examine five different approaches to improve this baseline result: (i) reordering AMR branches to match the word order of the input sentence increases performance to 58.3; (ii) adding part-of-speech tags (automatically produced) to the input shows improvement as well (57.2); (iii) So does the introduction of super characters (conflating frequent sequences of characters to a single character), reaching 57.4; (iv) optimizing the training process by using pre-training and averaging a set of models increases performance to 58.7; (v) adding silver-standard training data obtained by an off-the-shelf parser yields the biggest improvement, resulting in an F-score of 64.0. Combining all five techniques leads to an F-score of 71.0 on holdout data, which is state-of-the-art in AMR parsing. This is remarkable because of the relative simplicity of the approach.",
"title": ""
},
{
"docid": "207b24c58d8417fc309a42e3bbd6dc16",
"text": "This study mainly remarks the efficiency of black-box modeling capacity of neural networks in the case of forecasting soccer match results, and opens up several debates on the nature of prediction and selection of input parameters. The selection of input parameters is a serious problem in soccer match prediction systems based on neural networks or statistical methods. Several input vector suggestions are implemented in literature which is mostly based on direct data from weekly charts. Here in this paper, two different input vector parameters have been tested via learning vector quantization networks in order to emphasize the importance of input parameter selection. The input vector parameters introduced in this study are plain and also meaningful when compared to other studies. The results of different approaches presented in this study are compared to each other, and also compared with the results of other neural network approaches and statistical methods in order to give an idea about the successful prediction performance. The paper is concluded with discussions about the nature of soccer match forecasting concept that may draw the interests of researchers willing to work in this area.",
"title": ""
},
{
"docid": "b68da205eb9bf4a6367250c6f04d2ad4",
"text": "Trends change rapidly in today’s world, prompting this key question: What is the mechanism behind the emergence of new trends? By representing real-world dynamic systems as complex networks, the emergence of new trends can be symbolized by vertices that “shine.” That is, at a specific time interval in a network’s life, certain vertices become increasingly connected to other vertices. This process creates new high-degree vertices, i.e., network stars. Thus, to study trends, we must look at how networks evolve over time and determine how the stars behave. In our research, we constructed the largest publicly available network evolution dataset to date, which contains 38,000 real-world networks and 2.5 million graphs. Then, we performed the first precise wide-scale analysis of the evolution of networks with various scales. Three primary observations resulted: (a) links are most prevalent among vertices that join a network at a similar time; (b) the rate that new vertices join a network is a central factor in molding a network’s topology; and (c) the emergence of network stars (high-degree vertices) is correlated with fast-growing networks. We applied our learnings to develop a flexible network-generation model based on large-scale, real-world data. This model gives a better understanding of how stars rise and fall within networks, and is applicable to dynamic systems both in nature and society. Multimedia Links I Video I Interactive Data Visualization I Data I Code Tutorials",
"title": ""
},
{
"docid": "dda021771ca1b1e3c56d978149fb30c3",
"text": "Intelligent interaction between humans and computers has been a dream of artificial intelligence since the beginning of digital era and one of the original motivations behind the creation of artificial intelligence. A key step towards the achievement of such an ambitious goal is to enable the Question Answering systems understand the information need of the user. In this thesis, we attempt to enable the QA system’s ability to understand the user’s information need by three approaches. First, an clarification question generation method is proposed to help the user clarify the information need and bridge information need gap between QA system and the user. Next, a translation based model is obtained from the large archives of Community Question Answering data, to model the information need behind a question and boost the performance of question recommendation. Finally, a fine-grained classification framework is proposed to enable the systems to recommend answered questions based on information need satisfaction.",
"title": ""
},
{
"docid": "be9c234d05dc6f6b2afafa05b3337cf4",
"text": "There has been much research on various aspects of Approximate Query Processing (AQP), such as different sampling strategies, error estimation mechanisms, and various types of data synopses. However, many subtle challenges arise when building an actual AQP engine that can be deployed and used by real world applications. These subtleties are often ignored (or at least not elaborated) by the theoretical literature and academic prototypes alike. For the first time to the best of our knowledge, in this article, we focus on these subtle challenges that one must address when designing an AQP system. Our intention for this article is to serve as a handbook listing critical design choices that database practitioners must be aware of when building or using an AQP system, not to prescribe a specific solution to each challenge.",
"title": ""
},
{
"docid": "1a65b9d35bce45abeefe66882dcf4448",
"text": "Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinating technology which, among others, provides compelling properties about data integrity. Using the blockchain to face data integrity threats seems to be a natural choice, but its current limitations of low throughput, high latency, and weak stability hinder the practical feasibility of any blockchain-based solutions. In this paper, by focusing on a case study from the European SUNFISH project, which concerns the design of a secure by-design cloud federation platform for the public sector, we precisely delineate the actual data integrity needs of cloud computing environments and the research questions to be tackled to adopt blockchain-based databases. First, we detail the open research questions and the difficulties inherent in addressing them. Then, we outline a preliminary design of an effective blockchain-based database for cloud computing environments.",
"title": ""
},
{
"docid": "c61a39f0ba3f24f10c5edd8ad39c7a20",
"text": "REINFORCEMENT LEARNING AND ITS APPLICATION TO CONTROL",
"title": ""
},
{
"docid": "60bb725cf5f0923101949fc11e93502a",
"text": "An important ability of cognitive systems is the ability to familiarize themselves with the properties of objects and their environment as well as to develop an understanding of the consequences of their own actions on physical objects. Developing developmental approaches that allow cognitive systems to familiarize with objects in this sense via guided self-exploration is an important challenge within the field of developmental robotics. In this paper we present a novel approach that allows cognitive systems to familiarize themselves with the properties of objects and the effects of their actions on them in a self-exploration fashion. Our approach is inspired by developmental studies that hypothesize that infants have a propensity to systematically explore the connection between own actions and their perceptual consequences in order to support inter-modal calibration of their bodies. We propose a reinforcement-based approach operating in a continuous state space in which the function predicting cumulated future rewards is learned via a deep Q-network. We investigate the impact of the structure of rewards, the impact of different regularization approaches as well as the impact of different exploration strategies.",
"title": ""
},
{
"docid": "ed33b5fae6bc0af64668b137a3a64202",
"text": "In this study the effect of the Edmodo social learning environment on mobile assisted language learning (MALL) was examined by seeking the opinions of students. Using a quantitative experimental approach, this study was conducted by conducting a questionnaire before and after using the social learning network Edmodo. Students attended lessons with their mobile devices. The course materials were shared in the network via Edmodo group sharing tools. The students exchanged idea and developed projects, and felt as though they were in a real classroom setting. The students were also able to access various multimedia content. The results of the study indicate that Edmodo improves students’ foreign language learning, increases their success, strengthens communication among students, and serves as an entertaining learning environment for them. The educationally suitable sharing structure and the positive user opinions described in this study indicate that Edmodo is also usable in other lessons. Edmodo can be used on various mobile devices, including smartphones, in addition to the web. This advantageous feature contributes to the usefulness of Edmodo as a scaffold for education.",
"title": ""
},
{
"docid": "6a33013c19dc59d8871e217461d479e9",
"text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.",
"title": ""
},
{
"docid": "4c788138dd1b390c059bb9156cd54941",
"text": "We introduce second-order vector representations of words, induced from nearest neighborhood topological features in pre-trained contextual word embeddings. We then analyze the effects of using second-order embeddings as input features in two deep natural language processing models, for named entity recognition and recognizing textual entailment, as well as a linear model for paraphrase recognition. Surprisingly, we find that nearest neighbor information alone is sufficient to capture most of the performance benefits derived from using pre-trained word embeddings. Furthermore, second-order embeddings are able to handle highly heterogeneous data better than first-order representations, though at the cost of some specificity. Additionally, augmenting contextual embeddings with second-order information further improves model performance in some cases. Due to variance in the random initializations of word embeddings, utilizing nearest neighbor features from multiple first-order embedding samples can also contribute to downstream performance gains. Finally, we identify intriguing characteristics of second-order embedding spaces for further research, including much higher density and different semantic interpretations of cosine similarity.",
"title": ""
},
{
"docid": "e86f1f37eac7c2182c5f77c527d8fac6",
"text": "Eating members of one's own species is one of the few remaining taboos in modern human societies. In humans, aggression cannibalism has been associated with mental illness. The objective of this report is to examine the unique set of circumstances and characteristics revealing the underlying etiology leading to such an act and the type of psychological effect it has for the perpetrator. A case report of a patient with paranoid schizophrenia who committed patricide and cannibalism is presented. The psychosocial implications of anthropophagy on the particular patient management are outlined.",
"title": ""
},
{
"docid": "0182e6dcf7c8ec981886dfa2586a0d5d",
"text": "MOTIVATION\nMetabolomics is a post genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discrimating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discrimate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitichondia and chloroplasts).\n\n\nOVERVIEW\nA gas chromotography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discrimated between each other and their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are two most important metabolites for discriminating between the crosses. These results are consistant with genotype differences in mitochondia and chloroplasts.",
"title": ""
},
{
"docid": "e3b92d76bb139d0601c85416e8afaca4",
"text": "Conventional supervised object recognition methods have been investigated for many years. Despite their successes, there are still two suffering limitations: (1) various information of an object is represented by artificial features only derived from RGB images, (2) lots of manually labeled data is required by supervised learning. To address those limitations, we propose a new semi-supervised learning framework based on RGB and depth (RGB-D) images to improve object recognition. In particular, our framework has two modules: (1) RGB and depth images are represented by convolutional-recursive neural networks to construct high level features, respectively, (2) co-training is exploited to make full use of unlabeled RGB-D instances due to the existing two independent views. Experiments on the standard RGB-D object dataset demonstrate that our method can compete against with other state-of-the-art methods with only 20% labeled data.",
"title": ""
},
{
"docid": "6cc203d16e715cbd71efdeca380f3661",
"text": "PURPOSE\nTo determine a population-based estimate of communication disorders (CDs) in children; the co-occurrence of intellectual disability (ID), autism, and emotional/behavioral disorders; and the impact of these conditions on the prevalence of CDs.\n\n\nMETHOD\nSurveillance targeted 8-year-olds born in 1994 residing in 2002 in the 3 most populous counties in Utah (n = 26,315). A multiple-source record review was conducted at all major health and educational facilities.\n\n\nRESULTS\nA total of 1,667 children met the criteria of CD. The prevalence of CD was estimated to be 63.4 per 1,000 8-year-olds (95% confidence interval = 60.4-66.2). The ratio of boys to girls was 1.8:1. Four percent of the CD cases were identified with an ID and 3.7% with autism spectrum disorders (ASD). Adjusting the CD prevalence to exclude ASD and/or ID cases significantly affected the CD prevalence rate. Other frequently co-occurring emotional/behavioral disorders with CD were attention deficit/hyperactivity disorder, anxiety, and conduct disorder.\n\n\nCONCLUSIONS\nFindings affirm that CDs and co-occurring mental health conditions are a major educational and public health concern.",
"title": ""
},
{
"docid": "b266ab6e6a0fd75fb3d97b25970cab99",
"text": "a r t i c l e i n f o Keywords: Customer relationship management CRM Customer relationship performance Information technology Marketing capabilities Social media technology This study examines how social media technology usage and customer-centric management systems contribute to a firm-level capability of social customer relationship management (CRM). Drawing from the literature in marketing, information systems, and strategic management, the first contribution of this study is the conceptu-alization and measurement of social CRM capability. The second key contribution is the examination of how social CRM capability is influenced by both customer-centric management systems and social media technologies. These two resources are found to have an interactive effect on the formation of a firm-level capability that is shown to positively relate to customer relationship performance. The study analyzes data from 308 organizations using a structural equation modeling approach. Much like marketing managers in the late 1990s through early 2000s, who participated in the widespread deployment of customer relationship management (CRM) technologies, today's managers are charged with integrating nascent technologies – namely, social media applications – with existing systems and processes to develop new capabilities that foster stronger relationships with customers. This merger of existing CRM systems with social media technology has given way to a new concept of CRM that incorporates a more collaborative and network-focused approach to managing customer relationships. The term social CRM has recently emerged to describe this new way of developing and maintaining customer relationships (Greenberg, 2010). Marketing scholars have defined social CRM as the integration of customer-facing activities, including processes, systems, and technologies, with emergent social media applications to engage customers in collaborative conversations and enhance customer relationships (Greenberg, 2010; Trainor, 2012). Organizations are recognizing the potential of social CRM and have made considerable investments in social CRM technology over the past two years. According to Sarner et al. (2011), spending in social CRM technology increased by more than 40% in 2010 and is expected to exceed $1 billion by 2013. Despite the current hype surrounding social media applications, the efficacy of social CRM technology remains largely unknown and underexplored. Several questions remain unanswered, such as: 1) Can social CRM increase customer retention and loyalty? 2) How do social CRM technologies contribute to firm outcomes? 3) What role is played by CRM processes and technologies? As a result, companies are largely left to experiment with their social application implementations (Sarner et al., 2011), and they …",
"title": ""
},
{
"docid": "8b3431783f1dc699be1153ad80348d3e",
"text": "Quality Function Deployment (QFD) was conceived in Japan in the late 1960's, and introduced to America and Europe in 1983. This paper will provide a general overview of the QFD methodology and approach to product development. Once familiarity with the tool is established, a real-life application of the technique will be provided in a case study. The case study will illustrate how QFD was used to develop a new tape product and provide counsel to those that may want to implement the QFD process. Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.”",
"title": ""
},
{
"docid": "9b1f40687d0c9b78efdf6d1e19769294",
"text": "The ideal cell type to be used for cartilage therapy should possess a proven chondrogenic capacity, not cause donor-site morbidity, and should be readily expandable in culture without losing their phenotype. There are several cell sources being investigated to promote cartilage regeneration: mature articular chondrocytes, chondrocyte progenitors, and various stem cells. Most recently, stem cells isolated from joint tissue, such as chondrogenic stem/progenitors from cartilage itself, synovial fluid, synovial membrane, and infrapatellar fat pad (IFP) have gained great attention due to their increased chondrogenic capacity over the bone marrow and subcutaneous adipose-derived stem cells. In this review, we first describe the IFP anatomy and compare and contrast it with other adipose tissues, with a particular focus on the embryological and developmental aspects of the tissue. We then discuss the recent advances in IFP stem cells for regenerative medicine. We compare their properties with other stem cell types and discuss an ontogeny relationship with other joint cells and their role on in vivo cartilage repair. We conclude with a perspective for future clinical trials using IFP stem cells.",
"title": ""
}
] | scidocsrr |
c916a926992a6bb405d068ff46b736a2 | Searching Trajectories by Regions of Interest | [
{
"docid": "4d4219d8e4fd1aa86724f3561aea414b",
"text": "Trajectory search has long been an attractive and challenging topic which blooms various interesting applications in spatial-temporal databases. In this work, we study a new problem of searching trajectories by locations, in which context the query is only a small set of locations with or without an order specified, while the target is to find the k Best-Connected Trajectories (k-BCT) from a database such that the k-BCT best connect the designated locations geographically. Different from the conventional trajectory search that looks for similar trajectories w.r.t. shape or other criteria by using a sample query trajectory, we focus on the goodness of connection provided by a trajectory to the specified query locations. This new query can benefit users in many novel applications such as trip planning.\n In our work, we firstly define a new similarity function for measuring how well a trajectory connects the query locations, with both spatial distance and order constraint being considered. Upon the observation that the number of query locations is normally small (e.g. 10 or less) since it is impractical for a user to input too many locations, we analyze the feasibility of using a general-purpose spatial index to achieve efficient k-BCT search, based on a simple Incremental k-NN based Algorithm (IKNN). The IKNN effectively prunes and refines trajectories by using the devised lower bound and upper bound of similarity. Our contributions mainly lie in adapting the best-first and depth-first k-NN algorithms to the basic IKNN properly, and more importantly ensuring the efficiency in both search effort and memory usage. An in-depth study on the adaption and its efficiency is provided. Further optimization is also presented to accelerate the IKNN algorithm. Finally, we verify the efficiency of the algorithm by extensive experiments.",
"title": ""
}
] | [
{
"docid": "ce7000befa45746d7cc8cf8c2ffb3246",
"text": "The quantity of text information published in Arabic language on the net requires the implementation of effective techniques for the extraction and classifying of relevant information contained in large corpus of texts. In this paper we presented an implementation of an enhanced k-NN Arabic text classifier. We apply the traditional k-NN and Naive Bayes from Weka Toolkit for comparison purpose. Our proposed modified k-NN algorithm features an improved decision rule to skip the classes that are less similar and identify the right class from k nearest neighbours which increases the accuracy. The study evaluates the improved decision rule technique using the standard of recall, precision and f-measure as the basis of comparison. We concluded that the effectiveness of the proposed classifier is promising and outperforms the classical k-NN classifier.",
"title": ""
},
{
"docid": "9677d364752d50160557bd8e9dfa0dfb",
"text": "a Junior Research Group of Primate Sexual Selection, Department of Reproductive Biology, German Primate Center Courant Research Center ‘Evolution of Social Behavior’, Georg-August-Universität, Germany c Junior Research Group of Primate Kin Selection, Department of Primatology, Max-Planck-Institute for Evolutionary Anthropology, Germany d Institute of Biology, Faculty of Bioscience, Pharmacy and Psychology, University of Leipzig, Germany e Faculty of Veterinary Medicine, Bogor Agricultural University, Indonesia",
"title": ""
},
{
"docid": "6cf1bcb5396a096a9bcb69186292060a",
"text": "Existing feature-based recommendation methods incorporate auxiliary features about users and/or items to address data sparsity and cold start issues. They mainly consider features that are organized in a flat structure, where features are independent and in a same level. However, auxiliary features are often organized in rich knowledge structures (e.g. hierarchy) to describe their relationships. In this paper, we propose a novel matrix factorization framework with recursive regularization -- ReMF, which jointly models and learns the influence of hierarchically-organized features on user-item interactions, thus to improve recommendation accuracy. It also provides characterization of how different features in the hierarchy co-influence the modeling of user-item interactions. Empirical results on real-world data sets demonstrate that ReMF consistently outperforms state-of-the-art feature-based recommendation methods.",
"title": ""
},
{
"docid": "8da45338656d4cd92a09ce7c1fdc3353",
"text": "Revelations over the past couple of years highlight the importance of understanding malicious and surreptitious weakening of cryptographic systems. We provide an overview of this domain, using a number of historical examples to drive development of a weaknesses taxonomy. This allows comparing different approaches to sabotage. We categorize a broader set of potential avenues for weakening systems using this taxonomy, and discuss what future research is needed to provide sabotage-resilient cryptography.",
"title": ""
},
{
"docid": "f657ec927e0cd39d06428dc3ee37e5e2",
"text": "Muscle hernias of the lower leg involving the tibialis anterior, peroneus brevis, and lateral head of the gastrocnemius were found in three different patients. MRI findings allowed recognition of herniated muscle in all cases and identification of fascial defect in two of them. MR imaging findings and the value of dynamic MR imaging is emphasized.",
"title": ""
},
{
"docid": "544cdcd97568a61e4a02a3ea37d6a0b5",
"text": "In this paper, we describe a data-driven approach to leverage repositories of 3D models for scene understanding. Our ability to relate what we see in an image to a large collection of 3D models allows us to transfer information from these models, creating a rich understanding of the scene. We develop a framework for auto-calibrating a camera, rendering 3D models from the viewpoint an image was taken, and computing a similarity measure between each 3D model and an input image. We demonstrate this data-driven approach in the context of geometry estimation and show the ability to find the identities, poses and styles of objects in a scene. The true benefit of 3DNN compared to a traditional 2D nearest-neighbor approach is that by generalizing across viewpoints, we free ourselves from the need to have training examples captured from all possible viewpoints. Thus, we are able to achieve comparable results using orders of magnitude less data, and recognize objects from never-before-seen viewpoints. In this work, we describe the 3DNN algorithm and rigorously evaluate its performance for the tasks of geometry estimation and object detection/segmentation, as well as two novel applications: affordance estimation and photorealistic object insertion.",
"title": ""
},
{
"docid": "f9c78846a862930470899f62efffa6a8",
"text": "For fast and accurate motion of 6-axis articulated robot, more noble motion control strategy is needed. In general, the movement strategy of industrial robots can be divided into two kinds, PTP (Point to Point) and CP (Continuous Path). In recent, industrial robots which should be co-worked with machine tools are increasingly needed for performing various jobs, as well as simple handling or welding. Therefore, in order to cope with high-speed handling of the cooperation of industrial robots with machine tools or other devices, CP should be implemented so as to reduce vibration and noise, as well as decreasing operation time. This paper will realize CP motion (especially joint-linear) blending in 3-dimensional space for a 6-axis articulated (lab-manufactured) robot (called as “RS2”) by using LabVIEW® [6] programming, based on a parametric interpolation. Another small contribution of this paper is the proposal of motion blending simulation technique based on Recurdyn® V7, in order to figure out whether the joint-linear blending motion can generate the stable motion of robot in the sense of velocity magnitude at the end-effector of robot or not. In order to evaluate the performance of joint-linear motion blending, simple PTP (i.e., linear-linear) is also physically implemented on RS2. The implementation results of joint-linear motion blending and PTP are compared in terms of vibration magnitude and travel time by using the vibration testing equipment of Medallion of Zonic®. It can be confirmed verified that the vibration peak of joint-linear motion blending has been reduced to 1/10, compared to that of PTP.",
"title": ""
},
{
"docid": "21f6ca062098c0dcf04fe8fadfc67285",
"text": "The Key study in this paper is to begin the investigation process with the initial forensic analysis in the segments of the storage media which would definitely contain the digital forensic evidences. These Storage media Locations is referred as the Windows registry. Identifying the forensic evidence from windows registry may take less time than required in the case of all locations of a storage media. Our main focus in this research will be to study the registry structure of Windows 7 and identify the useful information within the registry keys of windows 7 that may be extremely useful to carry out any task of digital forensic analysis. The main aim is to describe the importance of the study on computer & digital forensics. The Idea behind the research is to implement a forensic tool which will be very useful in extracting the digital evidences and present them in usable form to a forensic investigator. The work includes identifying various events registry keys value such as machine last shut down time along with machine name, List of all the wireless networks that the computer has connected to; List of the most recently used files or applications, List of all the USB devices that have been attached to the computer and many more. This work aims to point out the importance of windows forensic analysis to extract and identify the hidden information which shall act as an evidence tool to track and gather the user activities pattern. All Research was conducted in a Windows 7 Environment. Keywords—Windows Registry, Windows 7 Forensic Analysis, Windows Registry Structure, Analysing Registry Key, Digital Forensic Identification, Forensic data Collection, Examination of Windows Registry, Decoding of Windows Registry Keys, Discovering User Activities Patterns, Computer Forensic Investigation Tool.",
"title": ""
},
{
"docid": "8cdd4a8910467974dc7cfee30f6f570b",
"text": "This work contains a theoretical study and computer simulations of a new self-organizing process. The principal discovery is that in a simple network of adaptive physical elements which receives signals from a primary event space, the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events. In other words, a principle has been discovered which facilitates the automatic formation of topologically correct maps of features of observable events. The basic self-organizing system is a one- or two-dimensional array of processing units resembling a network of threshold-logic units, and characterized by short-range lateral feedback between neighbouring units. Several types of computer simulations are used to demonstrate the ordering process as well as the conditions under which it fails.",
"title": ""
},
{
"docid": "c7f944e3c31fbb45dcd83252b43f73ff",
"text": "The moderation of content in many social media systems, such as Twitter and Facebook, motivated the emergence of a new social network system that promotes free speech, named Gab. Soon after that, Gab has been removed from Google Play Store for violating the company's hate speech policy and it has been rejected by Apple for similar reasons. In this paper we characterize Gab, aiming at understanding who are the users who joined it and what kind of content they share in this system. Our findings show that Gab is a very politically oriented system that hosts banned users from other social networks, some of them due to possible cases of hate speech and association with extremism. We provide the first measurement of news dissemination inside a right-leaning echo chamber, investigating a social media where readers are rarely exposed to content that cuts across ideological lines, but rather are fed with content that reinforces their current political or social views.",
"title": ""
},
{
"docid": "35dd6675e287b5e364998ee138677032",
"text": "Focussed structured document retrieval employs the concept of best entry points (BEPs), which are intended to provide optimal starting-points from which users can browse to relevant document components. This paper describes two small-scale studies, using experimental data from the Shakespeare user study, which developed and evaluated different approaches to the problem of automatic identification of BEPs.",
"title": ""
},
{
"docid": "31555a5981fd234fe9dce3ed47f690f2",
"text": "An accredited biennial 2012 study by the Association of Certified Fraud Examiners claims that on average 5% of a company’s revenue is lost because of unchecked fraud every year. The reason for such heavy losses are that it takes around 18 months for a fraud to be caught and audits catch only 3% of the actual fraud. This begs the need for better tools and processes to be able to quickly and cheaply identify potential malefactors. In this paper, we describe a robust tool to identify procurement related fraud/risk, though the general design and the analytical components could be adapted to detecting fraud in other domains. Besides analyzing standard transactional data, our solution analyzes multiple public and private data sources leading to wider coverage of fraud types than what generally exists in the marketplace. Moreover, our approach is more principled in the sense that the learning component, which is based on investigation feedback has formal guarantees. Though such a tool is ever evolving, an initial deployment of this tool over the past 6 months has found many interesting cases from compliance risk and fraud point of view, increasing the number of true positives found by over 80% compared with other state-of-the-art tools that the domain experts were previously using.",
"title": ""
},
{
"docid": "ca267729b10d10abdd529d002d679e3a",
"text": "Software development organizations are increasingly interested in the possibility of adopting agile development methods. Organizations that have been employing the Capability Maturity Model (CMM/CMMI) for making improvements are now changing their software development processes towards agility. By deploying agile methods, these organizations are making an investment the success of which needs to be proven. However, CMMI does not always support interpretations in an agile context. Consequently, assessments should be implemented in a manner that takes the agile context into account, while still producing useful results. This paper proposes an approach for agile software development assessment using CMMI and describes how this approach was used for software process improvement purposes in organizations that had either been planning to use or were using agile software development methods.",
"title": ""
},
{
"docid": "6897a459e95ac14772de264545970726",
"text": "There is a need for a system which provides real-time local environmental data in rural crop fields for the detection and management of fungal diseases. This paper presents the design of an Internet of Things (IoT) system consisting of a device capable of sending real-time environmental data to cloud storage and a machine learning algorithm to predict environmental conditions for fungal detection and prevention. The stored environmental data on conditions such as air temperature, relative air humidity, wind speed, and rain fall is accessed and processed by a remote computer for analysis and management purposes. A machine learning algorithm using Support Vector Machine regression (SVMr) was developed to process the raw data and predict short-term (day-to-day) air temperature, relative air humidity, and wind speed values to assist in predicting the presence and spread of harmful fungal diseases through the local crop field. Together, the environmental data and environmental predictions made easily accessible by this IoT system will ultimately assist crop field managers by facilitating better management and prevention of fungal disease spread.",
"title": ""
},
{
"docid": "5b73fd2439e02906349f3afe2c2e331c",
"text": "This paper presents a varactor-based power divider with reconfigurable power-dividing ratio and reconfigurable in-phase or out-of-phase phase relation between outputs. By properly controlling the tuning varactors, the power divider can be either in phase or out of phase and each with a wide range of tunable power-dividing ratio. The proposed microstrip power divider was prototyped and experimentally characterized. Measured and simulated results are in good agreement.",
"title": ""
},
{
"docid": "8e6a83df0235cd6e27fbc14abb61c5fc",
"text": "The management of postprandial hyperglycemia is an important strategy in the control of diabetes mellitus and complications associated with the disease, especially in the diabetes type 2. Therefore, inhibitors of carbohydrate hydrolyzing enzymes can be useful in the treatment of diabetes and medicinal plants can offer an attractive strategy for the purpose. Vaccinium arctostaphylos leaves are considered useful for the treatment of diabetes mellitus in some countries. In our research for antidiabetic compounds from natural sources, we found that the methanol extract of the leaves of V. arctostaphylos displayed a potent inhibitory activity on pancreatic α-amylase activity (IC50 = 0.53 (0.53 - 0.54) mg/mL). The bioassay-guided fractionation of the extract resulted in the isolation of quercetin as an active α-amylase inhibitor. Quercetin showed a dose-dependent inhibitory effect with IC50 value 0.17 (0.16 - 0.17) mM.",
"title": ""
},
{
"docid": "4213993be9e2cf6d3470c59db20ea091",
"text": "The virtual instrument is the main content of instrument technology nowadays. This article details the implementation process of the virtual oscilloscope. It is designed by LabVIEW graphical programming language. The virtual oscilloscope can achieve waveform display, channel selection, data collection, data reading, writing and storage, spectrum analysis, printing and waveform parameters measurement. It also has a friendly user interface and can be operated conveniently.",
"title": ""
},
{
"docid": "b0593843ce815016a003c60f8f154006",
"text": "This paper introduces a method for acquiring forensic-grade evidence from Android smartphones using open source tools. We investigate in particular cases where the suspect has made use of the smartphone's Wi-Fi or Bluetooth interfaces. We discuss the forensic analysis of four case studies, which revealed traces that were left in the inner structure of three mobile Android devices and also indicated security vulnerabilities. Subsequently, we propose a detailed plan for forensic examiners to follow when dealing with investigations of potential crimes committed using the wireless facilities of a suspect Android smartphone. This method can be followed to perform physical acquisition of data without using commercial tools and then to examine them safely in order to discover any activity associated with wireless communications. We evaluate our method using the Association of Chief Police Officers' (ACPO) guidelines of good practice for computer-based, electronic evidence and demonstrate that it is made up of an acceptable host of procedures for mobile forensic analysis, focused specifically on device Bluetooth and Wi-Fi facilities.",
"title": ""
},
{
"docid": "5780d05a410270bfb3aa6ba511caf3a1",
"text": "We present an extension to Continuous Time Bayesian Networks (CTBN) called Generalized CTBN (GCTBN). The formalism allows one to model continuous time delayed variables (with exponentially distributed transition rates), as well as non delayed or “immediate” variables, which act as standard chance nodes in a Bayesian Network. The usefulness of this kind of model is discussed through an example concerning the reliability of a simple component-based system. The interpretation of GCTBN is proposed in terms of Generalized Stochastic Petri Nets (GSPN); the purpose is twofold: to provide a well-defined semantics for GCTBNin terms of the underlying stochastic process, and to provide an actual mean to perform inference (both prediction and smoothing) on GCTBN.",
"title": ""
},
{
"docid": "0fef8af603b8529d408e95610b981132",
"text": "Leveraging the concept of software-defined network (SDN), the integration of terrestrial 5G and satellite networks brings us lots of benefits. The placement problem of controllers and satellite gateways is of fundamental importance for design of such SDN-enabled integrated network, especially, for the network reliability and latency, since different placement schemes would produce various network performances. To the best of our knowledge, it is an entirely new problem. Toward this end, in this paper, we first explore the satellite gateway placement problem to obtain the minimum average latency. A simulated annealing based approximate solution (SAA), is developed for this problem, which is able to achieve a near-optimal latency. Based on the analysis of latency, we further investigate a more challenging problem, i.e., the joint placement of controllers and gateways, for the maximum network reliability while satisfying the latency constraint. A simulated annealing and clustering hybrid algorithm (SACA) is proposed to solve this problem. Extensive experiments based on real world online network topologies have been conducted and as validated by our numerical results, enumeration algorithms are able to produce optimal results but having extremely long running time, while SAA and SACA can achieve approximate optimal performances with much lower computational complexity.",
"title": ""
}
] | scidocsrr |
62e028e8d34e91c8349c546c10691c9a | unFriendly: Multi-party Privacy Risks in Social Networks | [
{
"docid": "21384ea8d80efbf2440fb09a61b03be2",
"text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"title": ""
}
] | [
{
"docid": "6d77f396fe93efb7acd34c69f7a7b2fa",
"text": "The main intrinsic evaluation for vector space representation has been focused on textual similarity, where the task is to predict how semantically similar two words or sentences are. We propose a novel framework, Story Cloze Evaluator, for evaluating vector representations which goes beyond textual similarity and captures the notion of predicting what should happen next given a context. This evaluation methodology is simple to run, scalable, reproducible by the community, non-subjective, 100% agreeable by human, and challenging to the state-of-theart models, which makes it a promising new framework for further investment of the representation learning community.",
"title": ""
},
{
"docid": "6f3938e2951996d4f41a5fa6e8c71aad",
"text": "Online Social Networks (OSNs), such as Facebook and Twitter, have become an integral part of our daily lives. There are hundreds of OSNs, each with its own focus in that each offers particular services and functionalities. Recent studies show that many OSN users create several accounts on multiple OSNs using the same or different personal information. Collecting all the available data of an individual from several OSNs and fusing it into a single profile can be useful for many purposes. In this paper, we introduce novel machine learning based methods for solving Entity Resolution (ER), a problem for matching user profiles across multiple OSNs. The presented methods are able to match between two user profiles from two different OSNs based on supervised learning techniques, which use features extracted from each one of the user profiles. By using the extracted features and supervised learning techniques, we developed classifiers which can perform entity matching between two profiles for the following scenarios: (a) matching entities across two OSNs; (b) searching for a user by similar name; and (c) de-anonymizing a user’s identity. The constructed classifiers were tested by using data collected from two popular OSNs, Facebook and Xing. We then evaluated the classifiers’ performances using various evaluation measures, such as true and false positive rates, accuracy, and the Area Under the receiver operator Curve (AUC). The constructed classifiers were evaluated and their classification performance measured by AUC was quite remarkable, with an AUC of up to 0.982 and an accuracy of up to 95.9% in identifying user profiles across two OSNs.",
"title": ""
},
{
"docid": "2216f853543186e73b1149bb5a0de297",
"text": "Scaffolds have been utilized in tissue regeneration to facilitate the formation and maturation of new tissues or organs where a balance between temporary mechanical support and mass transport (degradation and cell growth) is ideally achieved. Polymers have been widely chosen as tissue scaffolding material having a good combination of biodegradability, biocompatibility, and porous structure. Metals that can degrade in physiological environment, namely, biodegradable metals, are proposed as potential materials for hard tissue scaffolding where biodegradable polymers are often considered as having poor mechanical properties. Biodegradable metal scaffolds have showed interesting mechanical property that was close to that of human bone with tailored degradation behaviour. The current promising fabrication technique for making scaffolds, such as computation-aided solid free-form method, can be easily applied to metals. With further optimization in topologically ordered porosity design exploiting material property and fabrication technique, porous biodegradable metals could be the potential materials for making hard tissue scaffolds.",
"title": ""
},
{
"docid": "ddeb9251ed726a7b5df687a32b72fa5f",
"text": "Medical visualization is the use of computers to create 3D images from medical imaging data sets, almost all surgery and cancer treatment in the developed world relies on it.Volume visualization techniques includes iso-surface visualization, mesh visualization and point cloud visualization techniques, these techniques have revolutionized medicine. Much of modern medicine relies on the 3D imaging that is possible with magnetic resonance imaging (MRI) scanners, functional magnetic resonance imaging (fMRI)scanners, positron emission tomography (PET) scanners, ultrasound imaging (US) scanners, X-Ray scanners, bio-marker microscopy imaging scanners and computed tomography (CT) scanners, which make 3D images out of 2D slices. The primary goal of this report is the application-oriented optimization of existing volume rendering methods providing interactive frame-rates. Techniques are presented for traditional alpha-blending rendering, surface-shaded display, maximum intensity projection (MIP), and fast previewing with fully interactive parameter control. Different preprocessing strategies are proposed for interactive iso-surface rendering and fast previewing, such as the well-known marching cube algorithm.",
"title": ""
},
{
"docid": "e263a93a9d936a90f4e513ac65317541",
"text": "Neuronal power attenuation or enhancement in specific frequency bands over the sensorimotor cortex, called Event-Related Desynchronization (ERD) or Event-Related Synchronization (ERS), respectively, is a major phenomenon in brain activities involved in imaginary movement of body parts. However, it is known that the nature of motor imagery-related electroencephalogram (EEG) signals is non-stationary and highly timeand frequency-dependent spatial filter, which we call ‘non-homogeneous filter.’ We adaptively select bases of spatial filters over time and frequency. By taking both temporal and spectral features of EEGs in finding a spatial filter into account it is beneficial to be able to consider non-stationarity of EEG signals. In order to consider changes of ERD/ERS patterns over the time–frequency domain, we devise a spectrally and temporally weighted classification method via statistical analysis. Our experimental results on the BCI Competition IV dataset II-a and BCI Competition II dataset IV clearly presented the effectiveness of the proposed method outperforming other competing methods in the literature. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "db8f1de1961f4730e6fc40881f4d0641",
"text": "Non-thrombotic pulmonary embolism has recently been reported as a remote complication of filler injections to correct hollowing in the temporal region. The middle temporal vein (MTV) has been identified as being highly susceptible to accidental injection. The anatomy and tributaries of the MTV were investigated in six soft embalmed cadavers. The MTV was cannulated and injected in both anterograde and retrograde directions in ten additional cadavers using saline and black filler, respectively. The course and tributaries of the MTV were described. Regarding the infusion experiment, manual injection of saline was easily infused into the MTV toward the internal jugular vein, resulting in continuous flow of saline drainage. This revealed a direct channel from the MTV to the internal jugular vein. Assessment of a preventive maneuver during filler injections was effectively performed by pressing at the preauricular venous confluent point against the zygomatic process. Sudden retardation of saline flow from the drainage tube situated in the internal jugular vein was observed when the preauricular confluent point was compressed. Injection of black gel filler into the MTV and the tributaries through the cannulated tube directed toward the eye proved difficult. The mechanism of venous filler emboli in a clinical setting occurs when the MTV is accidentally cannulated. The filler emboli follow the anterograde venous blood stream to the pulmonary artery causing non-thrombotic pulmonary embolism. Pressing of the pretragal confluent point is strongly recommended during temporal injection to help prevent filler complications, but does not totally eliminate complication occurrence. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .",
"title": ""
},
{
"docid": "d36eec03e4fe2d491e22a758c5675c1f",
"text": "The large-scale deployment of modern phishing attacks relies on the automatic exploitation of vulnerable websites in the wild, to maximize profit while hindering attack traceability, detection and blacklisting. To the best of our knowledge, this is the first work that specifically leverages this adversarial behavior for detection purposes. We show that phishing webpages can be accurately detected by highlighting HTML code and visual differences with respect to other (legitimate) pages hosted within a compromised website. Our system, named DeltaPhish, can be installed as part of a web application firewall, to detect the presence of anomalous content on a website after compromise, and eventually prevent access to it. DeltaPhish is also robust against adversarial attempts in which the HTML code of the phishing page is carefully manipulated to evade detection. We empirically evaluate it on more than 5,500 webpages collected in the wild from compromised websites, showing that it is capable of detecting more than 99% of phishing webpages, while only misclassifying less than 1% of legitimate pages. We further show that the detection rate remains higher than 70% even under very sophisticated attacks carefully designed to evade our system. ∗Preprint version of the work accepted for publication at ESORICS 2017.",
"title": ""
},
{
"docid": "b2c265eb287b95bf87ecf38a5a4aa97b",
"text": "Photographs of hazy scenes typically suffer having low contrast and offer a limited visibility of the scene. This article describes a new method for single-image dehazing that relies on a generic regularity in natural images where pixels of small image patches typically exhibit a 1D distribution in RGB color space, known as color-lines. We derive a local formation model that explains the color-lines in the context of hazy scenes and use it for recovering the scene transmission based on the lines' offset from the origin. The lack of a dominant color-line inside a patch or its lack of consistency with the formation model allows us to identify and avoid false predictions. Thus, unlike existing approaches that follow their assumptions across the entire image, our algorithm validates its hypotheses and obtains more reliable estimates where possible.\n In addition, we describe a Markov random field model dedicated to producing complete and regularized transmission maps given noisy and scattered estimates. Unlike traditional field models that consist of local coupling, the new model is augmented with long-range connections between pixels of similar attributes. These connections allow our algorithm to properly resolve the transmission in isolated regions where nearby pixels do not offer relevant information.\n An extensive evaluation of our method over different types of images and its comparison to state-of-the-art methods over established benchmark images show a consistent improvement in the accuracy of the estimated scene transmission and recovered haze-free radiances.",
"title": ""
},
{
"docid": "c96dbf6084741f8b529e8a1de19cf109",
"text": "Metamorphic testing is an advanced technique to test programs without a true test oracle such as machine learning applications. Because these programs have no general oracle to identify their correctness, traditional testing techniques such as unit testing may not be helpful for developers to detect potential bugs. This paper presents a novel system, Kabu, which can dynamically infer properties of methods' states in programs that describe the characteristics of a method before and after transforming its input. These Metamorphic Properties (MPs) are pivotal to detecting potential bugs in programs without test oracles, but most previous work relies solely on human effort to identify them and only considers MPs between input parameters and output result (return value) of a program or method. This paper also proposes a testing concept, Metamorphic Differential Testing (MDT). By detecting different sets of MPs between different versions for the same method, Kabu reports potential bugs for human review. We have performed a preliminary evaluation of Kabu by comparing the MPs detected by humans with the MPs detected by Kabu. Our preliminary results are promising: Kabu can find more MPs than human developers, and MDT is effective at detecting function changes in methods.",
"title": ""
},
{
"docid": "760edd83045a80dbb2231c0ffbef2ea7",
"text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.",
"title": ""
},
{
"docid": "3de7ae48b25c19417b87aea0006ae19a",
"text": "Over the past three or four decades, there have been important advances in the understanding of the actions, exposure-response characteristics, and mechanisms of action of many common air pollutants. A multidisciplinary approach using epidemiology, animal toxicology, and controlled human exposure studies has contributed to the database. This review will emphasize studies of humans but will also draw on findings from the other disciplines. Air pollutants have been shown to cause responses ranging from reversible changes in respiratory symptoms and lung function, changes in airway reactivity and inflammation, structural remodeling of pulmonary airways, and impairment of pulmonary host defenses, to increased respiratory morbidity and mortality. Quantitative and qualitative understanding of the effects of a small group of air pollutants has advanced considerably, but the understanding is by no means complete, and the breadth of effects of all air pollutants is only partially understood.",
"title": ""
},
{
"docid": "14c786d87fc06ab85ad41f6f6c30db21",
"text": "When an attacker tries to penetrate the network, there are many defensive systems, including intrusion detection systems (IDSs). Most IDSs are capable of detecting many attacks, but can not provide a clear idea to the analyst because of the huge number of false alerts generated by these systems. This weakness in the IDS has led to the emergence of many methods in which to deal with these alerts, minimize them and highlight the real attacks. It has come to a stage to take a stock of the research results a comprehensive view so that further research in this area will be motivated objectively to fulfill the gaps",
"title": ""
},
{
"docid": "0d57c3d4067d94f867e7e06becd48519",
"text": "This thesis investigates the evolutionary plausibility of the Minimalist Program. Is such a theory of language reasonable given the assumption that the human linguistic capacity has been subject to the usual forces and processes of evolution? More generally, this thesis is a comment on the manner in which theories of language can and should be constrained. What are the constraints that must be taken into account when constructing a theory of language? These questions are addressed by applying evidence gathered in evolutionary biology to data from linguistics. The development of generative syntactic theorising in the late 20th century has led to a much redesigned conception of the human language faculty. The driving question ‘why is language the way it is?’ has prompted assumptions of simplicity, perfection, optimality, and economy for language; a minimal system operating in an economic fashion to fit into the larger cognitive architecture in a perfect manner. Studies in evolutionary linguistics, on the other hand, have been keen to demonstrate that language is complex, redundant, and adaptive, Pinker & Bloom’s (1990) seminal paper being perhaps the prime example of this. The question is whether these opposing views can be married in any way. Interdisciplinary evidence is brought to bear on this problem, demonstrating that any reconciliation is impossible. Evolutionary biology shows that perfection, simplicity, and economy do not arise in typically evolving systems, yet the Minimalist Program attaches these characteristics to language. It shows that evolvable systems exhibit degeneracy, modularity, and robustness, yet the Minimalist Program must rule these features out for language. It shows that evolution exhibits a trend towards complexity, yet the Minimalist Program excludes such a depiction of language.",
"title": ""
},
{
"docid": "0d2b905bc0d7f117d192a8b360cc13f0",
"text": "We investigate a previously unknown phase of phosphorus that shares its layered structure and high stability with the black phosphorus allotrope. We find the in-plane hexagonal structure and bulk layer stacking of this structure, which we call \"blue phosphorus,\" to be related to graphite. Unlike graphite and black phosphorus, blue phosphorus displays a wide fundamental band gap. Still, it should exfoliate easily to form quasi-two-dimensional structures suitable for electronic applications. We study a likely transformation pathway from black to blue phosphorus and discuss possible ways to synthesize the new structure.",
"title": ""
},
{
"docid": "28235352a7676169b872f67244c8cdc2",
"text": "In 3GPP LTE Release 13, Narrowband Internet of Things (NB-IoT) was standardized for providing wide-area connectivity for massive machine-type communications for IoT. In LTE Release 14, NB-IoT was further developed to deliver enhanced user experience in selected areas through the addition of features such as increased positioning accuracy, increased peak data rates, the introduction of a lower device power class, improved non-anchor carrier operation, multicast, and authorization of coverage enhancements. In this article, we provide an overview of these features introduced for NB-IoT in LTE Release 14. An analysis is given on the applicability of these features and their benefits to enhance the NB-IoT radio access technology.",
"title": ""
},
{
"docid": "6f1da2d00f63cae036db04fd272b8ef2",
"text": "Female genital cosmetic surgery is surgery performed on a woman within a normal range of variation of human anatomy. The issues are heightened by a lack of long-term and substantive evidence-based literature, conflict of interest from personal financial gain through performing these procedures, and confusion around macroethical and microethical domains. It is a source of conflict and controversy globally because the benefit and harm of offering these procedures raise concerns about harmful cultural views, education, and social vulnerability of women with regard to both ethics and human rights. The rights issues of who is defining normal female anatomy and function, as well as the economic vulnerability of women globally, bequeath the profession a greater responsibility to ensure that there is adequate health and general education-not just among patients but broadly in society-that there is neither limitation nor interference in the decision being made, and that there are no psychological disorders that could be influencing such choices.",
"title": ""
},
{
"docid": "8efc308fe9730aca44975ecfb0fa7581",
"text": "We give a survey of the developments in the theory of Backward Stochastic Differential Equations during the last 20 years, including the solutions’ existence and uniqueness, comparison theorem, nonlinear Feynman-Kac formula, g-expectation and many other important results in BSDE theory and their applications to dynamic pricing and hedging in an incomplete financial market. We also present our new framework of nonlinear expectation and its applications to financial risk measures under uncertainty of probability distributions. The generalized form of law of large numbers and central limit theorem under sublinear expectation shows that the limit distribution is a sublinear Gnormal distribution. A new type of Brownian motion, G-Brownian motion, is constructed which is a continuous stochastic process with independent and stationary increments under a sublinear expectation (or a nonlinear expectation). The corresponding robust version of Itô’s calculus turns out to be a basic tool for problems of risk measures in finance and, more general, for decision theory under uncertainty. We also discuss a type of “fully nonlinear” BSDE under nonlinear expectation. Mathematics Subject Classification (2010). 60H, 60E, 62C, 62D, 35J, 35K",
"title": ""
},
{
"docid": "409a7199f73e4dcdffe350e906c03d0f",
"text": "In this letter, we propose a protocol for an automatic food recognition system that identifies the contents of the meal from the images of the food. We developed a multilayered convolutional neural network (CNN) pipeline that takes advantages of the features from other deep networks and improves the efficiency. Numerous traditional handcrafted features and methods are explored, among which CNNs are chosen as the best performing features. Networks are trained and fine-tuned using preprocessed images and the filter outputs are fused to achieve higher accuracy. Experimental results on the largest real-world food recognition database ETH Food-101 and newly contributed Indian food image database demonstrate the effectiveness of the proposed methodology as compared to many other benchmark deep learned CNN frameworks.",
"title": ""
},
{
"docid": "e2f2961ab8c527914c3d23f8aa03e4bf",
"text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
}
] | scidocsrr |
e29efa1679ce80dc8db4812b8a7bacba | Improved Selective Refinement Network for Face Detection | [
{
"docid": "ccfb258fa88118aedbba5fa803808f75",
"text": "Face detection has been well studied for many years and one of remaining challenges is to detect small, blurred and partially occluded faces in uncontrolled environment. This paper proposes a novel contextassisted single shot face detector, named PyramidBox to handle the hard face detection problem. Observing the importance of the context, we improve the utilization of contextual information in the following three aspects. First, we design a novel context anchor to supervise high-level contextual feature learning by a semi-supervised method, which we call it PyramidAnchors. Second, we propose the Low-level Feature Pyramid Network to combine adequate high-level context semantic feature and Low-level facial feature together, which also allows the PyramidBox to predict faces of all scales in a single shot. Third, we introduce a contextsensitive structure to increase the capacity of prediction network to improve the final accuracy of output. In addition, we use the method of Data-anchor-sampling to augment the training samples across different scales, which increases the diversity of training data for smaller faces. By exploiting the value of context, PyramidBox achieves superior performance among the state-of-the-art over the two common face detection benchmarks, FDDB and WIDER FACE. Our code is available in PaddlePaddle: https://github.com/PaddlePaddle/models/tree/develop/ fluid/face_detection.",
"title": ""
}
] | [
{
"docid": "4044d493ac6c38fcb590a7fa5ced84d9",
"text": "Use of sub-design-rule (SDR) thick-gate-oxide MOS structures can significantly improve RF performance. Utilizing 3-stack 3.3-V MOSFET's with an SDR channel length, a 31.3-dBm 900-MHz Bulk CMOS T/R switch with transmit (TX) and receive (RX) insertion losses of 0.5 and 1.0 dB is realized. A 28-dBm 2.4-GHz T/R switch with TX and RX insertion losses of 0.8 and 1.2 dB is also demonstrated. SDR MOS varactors achieve Qmin of ~ 80 at 24 GHz with a tuning range of ~ 40%.",
"title": ""
},
{
"docid": "abb748541b980385e4b8bc477c5adc0e",
"text": "Spin–orbit torque, a torque brought about by in-plane current via the spin–orbit interactions in heavy-metal/ferromagnet nanostructures, provides a new pathway to switch the magnetization direction. Although there are many recent studies, they all build on one of two structures that have the easy axis of a nanomagnet lying orthogonal to the current, that is, along the z or y axes. Here, we present a new structure with the third geometry, that is, with the easy axis collinear with the current (along the x axis). We fabricate a three-terminal device with a Ta/CoFeB/MgO-based stack and demonstrate the switching operation driven by the spin–orbit torque due to Ta with a negative spin Hall angle. Comparisons with different geometries highlight the previously unknown mechanisms of spin–orbit torque switching. Our work offers a new avenue for exploring the physics of spin–orbit torque switching and its application to spintronics devices.",
"title": ""
},
{
"docid": "408122795467ff0247f95a997a1ed90a",
"text": "With the popularity of mobile devices, photo retargeting has become a useful technique that adapts a high-resolution photo onto a low-resolution screen. Conventional approaches are limited in two aspects. The first factor is the de-emphasized role of semantic content that is many times more important than low-level features in photo aesthetics. Second is the importance of image spatial modeling: toward a semantically reasonable retargeted photo, the spatial distribution of objects within an image should be accurately learned. To solve these two problems, we propose a new semantically aware photo retargeting that shrinks a photo according to region semantics. The key technique is a mechanism transferring semantics of noisy image labels (inaccurate labels predicted by a learner like an SVM) into different image regions. In particular, we first project the local aesthetic features (graphlets in this work) onto a semantic space, wherein image labels are selectively encoded according to their noise level. Then, a category-sharing model is proposed to robustly discover the semantics of each image region. The model is motivated by the observation that the semantic distribution of graphlets from images tagged by a common label remains stable in the presence of noisy labels. Thereafter, a spatial pyramid is constructed to hierarchically encode the spatial layout of graphlet semantics. Based on this, a probabilistic model is proposed to enforce the spatial layout of a retargeted photo to be maximally similar to those from the training photos. Experimental results show that (1) noisy image labels predicted by different learners can improve the retargeting performance, according to both qualitative and quantitative analysis, and (2) the category-sharing model stays stable even when 32.36% of image labels are incorrectly predicted.",
"title": ""
},
{
"docid": "26597dea3d011243a65a1d2acdae19e8",
"text": "Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. The bandwidth required to repair the system after a node failure also plays a crucial role in the system performance. In [1] authors have shown that a tradeoff exists between storage and repair bandwidth. They also have introduced the scheme of regenerating codes which meet this tradeoff. In this paper, a scheme of Exact Regenerating Codes is introduced, which are regenerating codes with an additional property of regenerating back the same node upon failure. For the minimum bandwidth point, which is suitable for applications like distributed mail servers, explicit construction for exact regenerating codes is provided. A subspace approach is provided, using which the necessary and sufficient conditions for a linear code to be an exact regenerating code are derived. This leads to the uniqueness of our construction. For the minimum storage point which suits applications such as storage in peer-to-peer systems, an explicit construction of regenerating codes for certain suitable parameters is provided. This code supports variable number of nodes and can handle multiple simultaneous node failures. The constructions given for both the points require a low field size and have low complexity.",
"title": ""
},
{
"docid": "9c349ef0f3a48eaeaf678b8730d4b82c",
"text": "This paper discusses the effectiveness of the EEG signal for human identification using four or less of channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because signal varies from person to person and impossible to replicate and steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed in 5 separate sessions conducted over a course of two weeks. Features were extracted using the wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, the neural networks algorithm was used to classify the feature vectors. Results show that, whether or not the subjects’ eyes were open are insignificant for a 4– channel biometrics system with a classification rate of 81%. However, for a 2–channel system, the P4 channel should not be included if data is acquired with the subjects’ eyes open. It was observed that for 2– channel system using only the C3 and C4 channels, a classification rate of 71% was achieved. Keywords—Biometric, EEG, Wavelet Packet Decomposition, Neural Networks",
"title": ""
},
{
"docid": "8930924a223ef6a8d19e52ab5c6e7736",
"text": "Modern perception systems are notoriously complex, featuring dozens of interacting parameters that must be tuned to achieve good performance. Conventional tuning approaches require expensive ground truth, while heuristic methods are difficult to generalize. In this work, we propose an introspective ground-truth-free approach to evaluating the performance of a generic perception system. By using the posterior distribution estimate generated by a Bayesian estimator, we show that the expected performance can be estimated efficiently and without ground truth. Our simulated and physical experiments in a demonstrative indoor ground robot state estimation application show that our approach can order parameters similarly to using a ground-truth system, and is able to accurately identify top-performing parameters in varying contexts. In contrast, baseline approaches that reason only about observation log-likelihood fail in the face of challenging perceptual phenomena.",
"title": ""
},
{
"docid": "5d398e35d6dc58b56a9257623cb83db0",
"text": "BACKGROUND\nAlthough much has been published with regard to the columella assessed on the frontal and lateral views, a paucity of literature exists regarding the basal view of the columella. The objective of this study was to evaluate the spectrum of columella deformities and devise a working classification system based on underlying anatomy.\n\n\nMETHODS\nA retrospective study was performed of 100 consecutive patients who presented for primary rhinoplasty. The preoperative basal view photographs for each patient were reviewed to determine whether they possessed ideal columellar aesthetics. Patients who had deformity of their columella were further scrutinized to determine the most likely underlying cause of the subsequent abnormality.\n\n\nRESULTS\nOf the 100 patient photographs assessed, only 16 (16 percent) were found to display ideal norms of the columella. The remaining 84 of 100 patients (84 percent) had some form of aesthetic abnormality and were further classified based on the most likely underlying cause. Type 1 deformities (caudal septum and/or spine) constituted 18 percent (18 of 100); type 2 (medial crura), 12 percent (12 of 100); type 3 (soft tissue), 6 percent (six of 100); and type 4 (combination), 48 percent (48 of 100).\n\n\nCONCLUSIONS\nDeformities may be classified according to the underlying cause, with combined deformity being the most common. Use of the herein discussed classification scheme will allow surgeons to approach this region in a comprehensive manner. Furthermore, use of such a system allows for a more standardized approach for surgical treatment.",
"title": ""
},
{
"docid": "aeba4012971d339a9a953a7b86f57eb8",
"text": "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"title": ""
},
{
"docid": "db66428e21d473b7d77fde0c3ae6d6c3",
"text": "In order to improve electric vehicle lead-acid battery charging speed, analysis the feasibility of shortening the charging time used the charge method with negative pulse discharge, presenting the negative pulse parameters determined method for the fast charging with pulse discharge, determined the negative pulse amplitude and negative pulse duration in the pulse charge with negative pulse. Experiments show that the determined parameters with this method has some Advantages such as short charging time, small temperature rise etc, and the method of negative pulse parameters determined can used for different capacity of lead-acid batteries.",
"title": ""
},
{
"docid": "d41d5ed278337cf3138880e628272f2d",
"text": "Technological changes and improved electronic communications seem, paradoxically, to be making cities more, rather than less, important. There is a strong correlation between urbanization and economic development across countries, and within-country evidence suggests that productivity rises in dense agglomerations. But urban economic advantages are often offset by the perennial urban curses of crime, congestion and contagious disease. The past history of the developed world suggests that these problems require more capable governments that use a combination of economic and engineering solutions. Though the scope of urban challenges can make remaining rural seem attractive, agrarian poverty has typically also been quite costly.",
"title": ""
},
{
"docid": "e7ada5ce425b1e814c9e4f958f0f3c11",
"text": "e recent boom of AI has seen the emergence of many humancomputer conversation systems such as Google Assistant, Microso Cortana, Amazon Echo and Apple Siri. We introduce and formalize the task of predicting questions in conversations, where the goal is to predict the new question that the user will ask, given the past conversational context. is task can be modeled as a “sequence matching” problem, where two sequences are given and the aim is to learn a model that maps any pair of sequences to a matching probability. Neural matching models, which adopt deep neural networks to learn sequence representations and matching scores, have aracted immense research interests of information retrieval and natural language processing communities. In this paper, we rst study neural matching models for the question retrieval task that has been widely explored in the literature, whereas the eectiveness of neural models for this task is relatively unstudied. We further evaluate the neural matching models in the next question prediction task in conversations. We have used the publicly available ora data and Ubuntu chat logs in our experiments. Our evaluations investigate the potential of neural matching models with representation learning for question retrieval and next question prediction in conversations. Experimental results show that neural matching models perform well for both tasks.",
"title": ""
},
{
"docid": "bbeb6f28ae02876dcce8a4cf205b6194",
"text": "We propose the design of a programming language for quantum computing. Traditionally, quantum algorithms are frequently expressed at the hardware level, for instance in terms of the quantum circuit model or quantum Turing machines. These approaches do not encourage structured programming or abstractions such as data types. In this paper, we describe the syntax and semantics of a simple quantum programming language with high-level features such as loops, recursive procedures, and structured data types. The language is functional in nature, statically typed, free of run-time errors, and it has an interesting denotational semantics in terms of complete partial orders of superoperators.",
"title": ""
},
{
"docid": "2e2d3ace23e70bf318b13543184aff86",
"text": "Let G “ pVG, EGq be a connected graph. The distance dGpu, vq between vertices u and v in G is the length of a shortest u ́ v path in G. The eccentricity of a vertex v in G is the integer eGpvq “ maxtdGpv, uq : u P VGu. The diameter of G is the integer dpGq “ maxteGpvq : v P VGu. The periphery of a vertex v of G is the set PGpvq “ tu P VG : dGpv, uq “ eGpvqu, while the periphery of G is the set P pGq “ tv P VG : eGpvq “ dpGqu. We say that graph G is hangable if PGpvq ⊆ P pGq for every vertex v of G. In this paper we prove that every block graph is hangable and discuss the hangability of products of graphs.",
"title": ""
},
{
"docid": "97d10e997f09554baa2c34556f49f1bf",
"text": "Computerized generation of humor is a notoriously difficult AI problem. We develop an algorithm called Libitum that helps humans generate humor in a Mad Lib R ©, which is a popular fill-in-the-blank game. The algorithm is based on a machine learned classifier that determines whether a potential fill-in word is funny in the context of the Mad Lib story. We use Amazon Mechanical Turk to create ground truth data and to judge humor for our classifier to mimic, and we make this data freely available. Our testing shows that Libitum successfully aids humans in filling in Mad Libs that are usually judged funnier than those filled in by humans with no computerized help. We go on to analyze why some words are better than others at making a Mad Lib funny.",
"title": ""
},
{
"docid": "6bcfc93a3bee13d2c5416e4cc5663646",
"text": "The choice of an adequate object shape representation is critical for efficient grasping and robot manipulation. A good representation has to account for two requirements: it should allow uncertain sensory fusion in a probabilistic way and it should serve as a basis for efficient grasp and motion generation. We consider Gaussian process implicit surface potentials as object shape representations. Sensory observations condition the Gaussian process such that its posterior mean defines an implicit surface which becomes an estimate of the object shape. Uncertain visual, haptic and laser data can equally be fused in the same Gaussian process shape estimate. The resulting implicit surface potential can then be used directly as a basis for a reach and grasp controller, serving as an attractor for the grasp end-effectors and steering the orientation of contact points. Our proposed controller results in a smooth reach and grasp trajectory without strict separation of phases. We validate the shape estimation using Gaussian processes in a simulation on randomly sampled shapes and the grasp controller on a real robot with 7DoF arm and 7DoF hand.",
"title": ""
},
{
"docid": "ed2ad5cd12eb164a685a60dc0d0d4a06",
"text": "Explainable Recommendation refers to the personalized recommendation algorithms that address the problem of why they not only provide users with the recommendations, but also provide explanations to make the user or system designer aware of why such items are recommended. In this way, it helps to improve the effectiveness, efficiency, persuasiveness, and user satisfaction of recommendation systems. In recent years, a large number of explainable recommendation approaches – especially model-based explainable recommendation algorithms – have been proposed and adopted in real-world systems. In this survey, we review the work on explainable recommendation that has been published in or before the year of 2018. We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why. We then conduct a comprehensive survey of explainable recommendation itself in terms of three aspects: 1) We provide a chronological research line of explanations in recommender systems, including the user study approaches in the early years, as well as the more recent model-based approaches. 2) We provide a taxonomy for explainable recommendation algorithms, including user-based, item-based, model-based, and post-model explanations. 3) We summarize the application of explainable recommendation in different recommendation tasks, including product recommendation, social recommendation, POI recommendation, etc. We devote a section to discuss the explanation perspectives in the broader IR and machine learning settings, as well as their relationship with explainable recommendation research. We end the survey by discussing potential future research directions to promote the explainable recommendation research area. now Publishers Inc.. Explainable Recommendation: A Survey and New Perspectives. Foundations and Trends © in Information Retrieval, vol. XX, no. XX, pp. 1–87, 2018. DOI: 10.1561/XXXXXXXXXX.",
"title": ""
},
{
"docid": "010926d088cf32ba3fafd8b4c4c0dedf",
"text": "The number and the size of spatial databases, e.g. for geomarketing, traffic control or environmental studies, are rapidly growing which results in an increasing need for spatial data mining. In this paper, we present new algorithms for spatial characterization and spatial trend analysis. For spatial characterization it is important that class membership of a database object is not only determined by its non-spatial attributes but also by the attributes of objects in its neighborhood. In spatial trend analysis, patterns of change of some non-spatial attributes in the neighborhood of a database object are determined. We present several algorithms for these tasks. These algorithms were implemented within a general framework for spatial data mining providing a small set of database primitives on top of a commercial spatial database management system. A performance evaluation using a real geographic database demonstrates the effectiveness of the proposed algorithms. Furthermore, we show how the algorithms can be combined to discover even more interesting spatial knowledge.",
"title": ""
},
{
"docid": "b5b6fc6ce7690ae8e49e1951b08172ce",
"text": "The output voltage derivative term associated with a PID controller injects significant noise in a dc-dc converter. This is mainly due to the parasitic resistance and inductance of the output capacitor. Particularly, during a large-signal transient, noise injection significantly degrades phase margin. Although noise characteristics can be improved by reducing the cutoff frequency of the low-pass filter associated with the voltage derivative, this degrades the closed-loop bandwidth. A formulation of a PID controller is introduced to replace the output voltage derivative with information about the capacitor current, thus reducing noise injection. It is shown that this formulation preserves the fundamental principle of a PID controller and incorporates a load current feedforward, as well as inductor current dynamics. This can be helpful to further improve bandwidth and phase margin. The proposed method is shown to be equivalent to a voltage-mode-controlled buck converter and a current-mode-controlled boost converter with a PID controller in the voltage feedback loop. A buck converter prototype is tested, and the proposed algorithm is implemented using a field-programmable gate array.",
"title": ""
},
{
"docid": "9c00313926a8c625fd15da8708aa941e",
"text": "OBJECTIVE\nThe objective of this study was to evaluate the effect of a dental water jet on plaque biofilm removal using scanning electron microscopy (SEM).\n\n\nMETHODOLOGY\nEight teeth with advanced aggressive periodontal disease were extracted. Ten thin slices were cut from four teeth. Two slices were used as the control. Eight were inoculated with saliva and incubated for 4 days. Four slices were treated using a standard jet tip, and four slices were treated using an orthodontic jet tip. The remaining four teeth were treated with the orthodontic jet tip but were not inoculated with saliva to grow new plaque biofilm. All experimental teeth were treated using a dental water jet for 3 seconds on medium pressure.\n\n\nRESULTS\nThe standard jet tip removed 99.99% of the salivary (ex vivo) biofilm, and the orthodontic jet tip removed 99.84% of the salivary biofilm. Observation of the remaining four teeth by the naked eye indicated that the orthodontic jet tip removed significant amounts of calcified (in vivo) plaque biofilm. This was confirmed by SEM evaluations.\n\n\nCONCLUSION\nThe Waterpik dental water jet (Water Pik, Inc, Fort Collins, CO) can remove both ex vivo and in vivo plaque biofilm significantly.",
"title": ""
},
{
"docid": "67d0e2d74f5b52d70bb194464d5c5b71",
"text": "Mobile phones can provide a number of benefits to older people. However, most mobile phone designs and form factors are targeted at younger people and middle-aged adults. To inform the design of mobile phones for seniors, we ran several participatory activities where seniors critiqued current mobile phones, chose important applications, and built their own imagined mobile phone system. We prototyped this system on a real mobile phone and evaluated the seniors' performance through user tests and a real-world deployment. We found that our participants wanted more than simple phone functions, and instead wanted a variety of application areas. While they were able to learn to use the software with little difficulty, hardware design made completing some tasks frustrating or difficult. Based on our experience with our participants, we offer considerations for the community about how to design mobile devices for seniors and how to engage them in participatory activities.",
"title": ""
}
] | scidocsrr |
1edd666e01785a141c316f7cd0f5e270 | A Chatbot Based On AIML Rules Extracted From Twitter Dialogues | [
{
"docid": "d103d7793a9ff39c43dce47d45742905",
"text": "This paper proposes an architecture for an open-domain conversational system and evaluates an implemented system. The proposed architecture is fully composed of modules based on natural language processing techniques. Experimental results using human subjects show that our architecture achieves significantly better naturalness than a retrieval-based baseline and that its naturalness is close to that of a rule-based system using 149K hand-crafted rules.",
"title": ""
}
] | [
{
"docid": "90f3c2ea17433ee296702cca53511b9e",
"text": "This paper presents the design process, detailed analysis, and prototyping of a novel-structured line-start solid-rotor-based axial-flux permanent-magnet (AFPM) motor capable of autostarting with solid-rotor rings. The preliminary design is a slotless double-sided AFPM motor with four poles for high torque density and stable operation. Two concentric unilevel-spaced raised rings are added to the inner and outer radii of the rotor discs for smooth line-start of the motor. The design allows the motor to operate at both starting and synchronous speeds. The basic equations for the solid rings of the rotor of the proposed AFPM motor are discussed. Nonsymmetry of the designed motor led to its 3-D time-stepping finite-element analysis (FEA) via Vector Field Opera 14.0, which evaluates the design parameters and predicts the transient performance. To verify the design, a prototype 1-hp four-pole three-phase line-start AFPM synchronous motor is built and is used to test the performance in real time. There is a good agreement between experimental and FEA-based computed results. It is found that the prototype motor maintains high starting torque and good synchronization.",
"title": ""
},
{
"docid": "a2845e100c20153f19e32b4e713ebbaa",
"text": "The efficiency of the ground penetrating radar (GPR) system significantly depends on the antenna performance as signal has to propagate through lossy and inhomogeneous media. In this research work a resistively loaded compact Bow-tie antenna which can operate through a wide bandwidth of 4.1 GHz is proposed. The sharp corners of the slot antenna are rounded so as to minimize the end-fire reflections. The proposed antenna employs a resistive loading technique through a thin sheet of graphite to attain the ultra-wide bandwidth. The simulated results obtained from CST Microwave Studio v14 and HFSS v14 show a good amount of agreement for the antenna performance parameters. The proposed antenna has potential to apply for the GPR applications as it provides improved radiation efficiency, enhanced bandwidth, gain, directivity and reduced end-fire reflections.",
"title": ""
},
{
"docid": "86fb01912ab343b95bb31e0b06fff851",
"text": "Serial periodic data exhibit both serial and periodic properties. For example, time continues forward serially, but weeks, months, and years are periods that recur. While there are extensive visualization techniques for exploring serial data, and a few for exploring periodic data, no existing technique simultaneously displays serial and periodic attributes of a data set. We introduce a spiral visualization technique, which displays data along a spiral to highlight serial attributes along the spiral axis and periodic ones along the radii. We show several applications of the spiral visualization to data exploration tasks, present our implementation, discuss the capacity for data analysis, and present findings of our informal study with users in data-rich scientific domains.",
"title": ""
},
{
"docid": "9c97262605b3505bbc33c64ff64cfcd5",
"text": "This essay focuses on possible nonhuman applications of CRISPR/Cas9 that are likely to be widely overlooked because they are unexpected and, in some cases, perhaps even \"frivolous.\" We look at five uses for \"CRISPR Critters\": wild de-extinction, domestic de-extinction, personal whim, art, and novel forms of disease prevention. We then discuss the current regulatory framework and its possible limitations in those contexts. We end with questions about some deeper issues raised by the increased human control over life on earth offered by genome editing.",
"title": ""
},
{
"docid": "6a602e4f48c0eb66161bce46d53f0409",
"text": "In this paper, we propose three metrics for detecting botnets through analyzing their behavior. Our social infrastructure (i.e., the Internet) is currently experiencing the danger of bots' malicious activities as the scale of botnets increases. Although it is imperative to detect botnet to help protect computers from attacks, effective metrics for botnet detection have not been adequately researched. In this work we measure enormous amounts of traffic passing through the Asian Internet Interconnection Initiatives (AIII) infrastructure. To validate the effectiveness of our proposed metrics, we analyze measured traffic in three experiments. The experimental results reveal that our metrics are applicable for detecting botnets, but further research is needed to refine their performance",
"title": ""
},
{
"docid": "3dd732828151a63d090a2633e3e48fac",
"text": "This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. The disciplined convex programming framework that has been shown useful in transforming problems to a standard form may be extended to create solvers themselves. Much work remains to be done in exploring the capabilities and limitations of automatic code generation. As computing power increases, and as automatic code generation improves, the authors expect convex optimization solvers to be found more and more often in real-time signal processing applications.",
"title": ""
},
{
"docid": "610629d3891c10442fe5065e07d33736",
"text": "We investigate in this paper deep learning (DL) solutions for prediction of driver's cognitive states (drowsy or alert) using EEG data. We discussed the novel channel-wise convolutional neural network (CCNN) and CCNN-R which is a CCNN variation that uses Restricted Boltzmann Machine in order to replace the convolutional filter. We also consider bagging classifiers based on DL hidden units as an alternative to the conventional DL solutions. To test the performance of the proposed methods, a large EEG dataset from 3 studies of driver's fatigue that includes 70 sessions from 37 subjects is assembled. All proposed methods are tested on both raw EEG and Independent Component Analysis (ICA)-transformed data for cross-session predictions. The results show that CCNN and CCNN-R outperform deep neural networks (DNN) and convolutional neural networks (CNN) as well as other non-DL algorithms and DL with raw EEG inputs achieves better performance than ICA features.",
"title": ""
},
{
"docid": "ebe93eda9810af02812dc4529ac6d651",
"text": "We present three smart contracts that allow a briber to fairly exchange bribes to miners who pursue a mining strategy benefiting the briber. The first contract, CensorshipCon, highlights that Ethereum’s uncle block reward policy can directly subsidise the cost of bribing miners. The second contract, HistoryRevisionCon, rewards miners via an in-band payment for reversing transactions or enforcing a new state of another contract. The third contract, GoldfingerCon, rewards miners in one cryptocurrency for reducing the utility of another cryptocurrency. This work is motivated by the need to understand the extent to which smart contracts can impact the incentive mechanisms involved in Nakamoto-style consensus protocols.",
"title": ""
},
{
"docid": "3d10a5dd2e58608ea6369d2c67e6401e",
"text": "We propose a novel ‘‘e-brush’’ for calligraphy and painting, which meets all the criteria for a good e-brush. We use only four attributes to capture the essential features of the brush, and a suitably powerful modeling metaphor for its behavior. The e-brush s geometry, dynamic motions, and pigment changes are all dealt with in a single model. A single model simplifies the synchronization between the various system modules, thus giving rise to a more stable system, and lower costs. By a careful tradeoff between the complexity of the model and computation efficiency, more elaborate simulation of the e-brush s deformation and its recovery for interactive painterly rendering is made possible. We also propose a novel paper–ink model to complement the brush s model, and a machine intelligence module to empower the user to easily create beautiful calligraphy and painting. Despite the complexity of the modeling behind the scene, the high-level user interface has a simplistic and friendly design. The final results created by our e-brush can rival the real artwork. 2004 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a3d7a0a672e9090072d0e3e7834844a2",
"text": "Hyper spectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in hyperspectral image involves a continuous spectrum that is used to classify the objects with great detail and precision. This paper presents hyperspectral image classification mechanism using genetic algorithm with empirical mode decomposition and image fusion used in preprocessing stage. 2-D Empirical mode decomposition method is used to remove any noisy components in each band of the hyperspectral data. After filtering, image fusion is performed on the hyperspectral bands to selectively merge the maximum possible features from the source images to form a single image. This fused image is classified using genetic algorithm. Different indices, such as K-means (KMI), Davies-Bouldin Index (DBI), and Xie-Beni Index (XBI) are used as objective functions. This method increases classification accuracy of hyperspectral image.",
"title": ""
},
{
"docid": "8ae21da19b8afabb941bc5bb450434a9",
"text": "A 7-month-old child presented with imperforate anus, penoscrotal hypospadias and transposition, and a midline mucosa-lined perineal mass. At surgery the mass was found to be supplied by the median sacral artery. It was excised and the anorectal malformation was repaired by posterior sagittal anorectoplasty. Histologically the mass revealed well-differentiated colonic tissue. The final diagnosis was well-differentiated sacrococcygeal teratoma in association with anorectal malformation.",
"title": ""
},
{
"docid": "a8695230b065ae2e4c5308dfe4f8c10e",
"text": "The paper describes a solution for the Yandex Personalized Web Search Challenge. The goal of the challenge is to rerank top ten web search query results to bring most personally relevant results on the top, thereby improving the search quality. The paper focuses on feature engineering for learning to rank in web search, including a novel pair-wise feature, shortand long-term personal navigation features. The paper demonstrates that point-wise logistic regression can achieve the stat-of-the-art performance in terms of normalized discounted cumulative gain with capability to scale up.",
"title": ""
},
{
"docid": "324291450811b40a006cc93525443633",
"text": "Many problems in radar and communication signal processing involve radio frequency (RF) signals of very high bandwidth. This presents a serious challenge to systems that might attempt to use a high-rate analog-to-digital converter (ADC) to sample these signals, as prescribed by the Shannon/Nyquist sampling theorem. In these situations, however, the information level of the signal is often far lower than the actual bandwidth, which prompts the question of whether more efficient schemes can be developed for measuring such signals. In this paper we propose a system that uses modulation, filtering, and sampling to produce a low-rate set of digital measurements. Our \"analog-to-information converter\" (AIC) is inspired by the theory of compressive sensing (CS), which states that a discrete signal having a sparse representation in some dictionary can be recovered from a small number of linear projections of that signal. We generalize the CS theory to continuous-time sparse signals, explain our proposed AIC system in the CS context, and discuss practical issues regarding implementation",
"title": ""
},
{
"docid": "6daa1bc00a4701a2782c1d5f82c518e2",
"text": "An 8-year-old Caucasian girl was referred with perineal bleeding of sudden onset during micturition. There was no history of trauma, fever or dysuria, but she had a history of constipation. Family history was unremarkable. Physical examination showed a prepubertal girl with a red ‘doughnut’-shaped lesion surrounding the urethral meatus (figure 1). Laboratory findings, including platelet count and coagulation, were normal. A vaginoscopy, performed using sedation, was negative. Swabs tested negative for sexually transmitted pathogens. A diagnosis of urethral prolapse (UP) was made on clinical appearance. Treatment with topical oestrogen cream was started and constipation treated with oral polyethylene glycol. On day 10, the bleeding stopped, and at week 5 there was a moderate regression of the UP. However, occasional mild bleeding persisted at 10 months, so she was referred to a urologist (figure 2). UP is an eversion of the distal urethral mucosa through the external meatus. It is most commonly seen in postmenopausal women and is uncommon in prepubertal girls. UP is rare in Caucasian children and more common in patients of African descent. 2 It may be asymptomatic or present with bleeding, spotting or urinary symptoms. The exact pathophysiological process of UP is unknown. Increased intra-abdominal pressure with straining, inadequate periurethral supporting tissue, neuromuscular dysfunction and a relative oestrogen deficiency are possible predisposing factors. Differential diagnoses include ureterocele, polyps, tumours and non-accidental injury. 3 Management options include conservative treatments such as tepid water baths and topical oestrogens. Surgery is indicated if bleeding, dysuria or pain persist. 5 Vaginoscopy in this case was possibly unnecessary, as there were no signs of trauma to the perineal area or other concerning signs or history of abuse. In the presence of typical UP, invasive diagnostic procedures should not be considered as first-line investigations and they should be reserved for cases of diagnostic uncertainty.",
"title": ""
},
{
"docid": "714df72467bc3e919b7ea7424883cf26",
"text": "Although a lot of attention has been paid to software cost estimation since 1960, making accurate effort and schedule estimation is still a challenge. To collect evidence and identify potential areas of improvement in software cost estimation, it is important to investigate the estimation accuracy, the estimation method used, and the factors influencing the adoption of estimation methods in current industry. This paper analyzed 112 projects from the Chinese software project benchmarking dataset and conducted questionnaire survey on 116 organizations to investigate the above information. The paper presents the current situations related to software project estimation in China and provides evidence-based suggestions on how to improve software project estimation. Our survey results suggest, e.g., that large projects were more prone to cost and schedule overruns, that most computing managers and professionals were neither satisfied nor dissatisfied with the project estimation, that very few organizations (15%) used model-based methods, and that the high adoption cost and insignificant benefit after adoption were the main causes for low use of model-based methods.",
"title": ""
},
{
"docid": "74fb666c47afc81b8e080f730e0d1fe0",
"text": "In current commercial Web search engines, queries are processed in the conjunctive mode, which requires the search engine to compute the intersection of a number of posting lists to determine the documents matching all query terms. In practice, the intersection operation takes a significant fraction of the query processing time, for some queries dominating the total query latency. Hence, efficient posting list intersection is critical for achieving short query latencies. In this work, we focus on improving the performance of posting list intersection by leveraging the compute capabilities of recent multicore systems. To this end, we consider various coarse-grained and fine-grained parallelization models for list intersection. Specifically, we present an algorithm that partitions the work associated with a given query into a number of small and independent tasks that are subsequently processed in parallel. Through a detailed empirical analysis of these alternative models, we demonstrate that exploiting parallelism at the finest-level of granularity is critical to achieve the best performance on multicore systems. On an eight-core system, the fine-grained parallelization method is able to achieve more than five times reduction in average query processing time while still exploiting the parallelism for high query throughput.",
"title": ""
},
{
"docid": "1303770cf8d0f1b0f312feb49281aa10",
"text": "A terahertz metamaterial absorber (MA) with properties of broadband width, polarization-insensitive, wide angle incidence is presented. Different from the previous methods to broaden the absorption width, this letter proposes a novel combinatorial way which units a nested structure with multiple metal-dielectric layers. We numerically investigate the proposed MA, and the simulation results show that the absorber achieves a broadband absorption over a frequency range of 0.896 THz with the absorptivity greater than 90%. Moreover, the full-width at half maximum of the absorber is up to 1.224 THz which is 61.2% with respect to the central frequency. The mechanism for the broadband absorption originates from the overlapping of longitudinal coupling between layers and coupling of the nested structure. Importantly, the nested structure makes a great contribution to broaden the absorption width. Thus, constructing a nested structure in a multi-layer absorber may be considered as an effective way to design broadband MAs.",
"title": ""
},
{
"docid": "b7a08eaeb69fa6206cb9aec9cc54f2c3",
"text": "This paper describes a computational pragmatic model which is geared towards providing helpful answers to modal and hypothetical questions. The work brings together elements from fonna l . semantic theories on modality m~d question answering, defines a wkler, pragmatically flavoured, notion of answerhood based on non-monotonic inference aod develops a notion of context, within which aspects of more cognitively oriented theories, such as Relevance Theory, can be accommodated. The model has been inlplemented. The research was fundexl by ESRC grant number R000231279.",
"title": ""
},
{
"docid": "8ccf463aa3eae6ed20001b8fad6f94a6",
"text": "Nowadays, contactless payments are becoming increasingly common as new smartphones, tablets, point-of-sale (POS) terminals and payment cards (often termed \"tap-and-pay\" cards) are designed to support Near Field Communication (NFC) technology. However, as NFC technology becomes pervasive, there have been concerns about how well NFC-enabled contactless payment systems protect individuals and organizations from emerging security and privacy threats. In this paper, we examine the security of contactless payment systems by considering the privacy threats and the different adversarial attacks that these systems must defend against. We focus our analysis on the underlying trust assumptions, security measures and technologies that form the basis on which contactless payment cards and NFC-enabled mobile wallets exchange sensitive transaction data with contactless POS terminals. We also explore the EMV and ISO standards for contactless payments and disclose their shortcomings with regards to enforcing security and privacy in contactless payment transactions. Our findings shed light on the discrepancies between the EMV and ISO standards, as well as how card issuing banks and mobile wallet providers configure their contactless payment cards and NFC-enabled mobile wallets based on these standards, respectively. These inconsistencies are disconcerting as they can be exploited by an adversary to compromise the integrity of contactless payment transactions.",
"title": ""
},
{
"docid": "75f5679d9c1bab3585c1bf28d50327d8",
"text": "From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, today, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitalized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.",
"title": ""
}
] | scidocsrr |
e8caa6cb31ec81ff9786a1d29d470272 | Research Note - Privacy Concerns and Privacy-Protective Behavior in Synchronous Online Social Interactions | [
{
"docid": "10b16932bb8c1d85f759c181da6e5407",
"text": "Many explanations of both proand anti-social behaviors in computer-mediated communication (CMC) appear to hinge on changes in individual self-awareness. In spite of this, little research has been devoted to understanding the effects of self-awareness in CMC. To fill this void, this study examined the effects of individuals public and private self-awareness in anonymous, time-restricted, and synchronous CMC. Two experiments were conducted. A pilot experiment tested and confirmed the effectiveness of using a Web camera combined with an alleged online audience to enhance users public self-awareness. In the main study users private and public self-awareness were manipulated in a crossed 2 · 2 factorial design. Pairs of participants completed a Desert Survival Problem via a synchronous, text-only chat program. After the task, they evaluated each other on intimacy, task/social orientation, formality, politeness, attraction, and group identification. The results suggest that a lack of private and public self-awareness does not automatically lead to impersonal tendencies in CMC as deindividuation perspectives of CMC would argue. Moreover, participants in this study were able to form favorable impressions in a completely anonymous environment based on brief interaction, which lends strong support to the idealization proposed by hyperpersonal theory. Findings are used to modify and extend current theoretical perspectives on CMC. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "a27ebb8015c950992084365d539f565e",
"text": "The art of tiling originated very early in the history of civilization. Almost every known human society has made use of tilings in some form or another. In particular, tilings using only regular polygons have great visual appeal. Decorated regular tilings with continuous and symmetrical patterns were widely used in decoration field, such as mosaics, pavements, and brick walls. In science, these tilings provide inspiration for synthetic organic chemistry. Building on previous CG&A “Beautiful Math” articles, the authors propose an invariant mapping method to create colorful patterns on Archimedean tilings (1-uniform tilings). The resulting patterns simultaneously have global crystallographic symmetry and local cyclic or dihedral symmetry.",
"title": ""
},
{
"docid": "6ee26f725bfb63a6ff72069e48404e68",
"text": "OBJECTIVE\nTo determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival.\n\n\nPATIENTS AND METHODS\nThis was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A \"FIT Treadmill Score\" was then derived from the β coefficients of the model with the highest survival discrimination.\n\n\nRESULTS\nThe median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811).\n\n\nCONCLUSION\nThe FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations.",
"title": ""
},
{
"docid": "14e8006ae1fc0d97e737ff2a5a4d98dd",
"text": "Building dialogue systems that can converse naturally with humans is a challenging yet intriguing problem of artificial intelligence. In open-domain human-computer conversation, where the conversational agent is expected to respond to human utterances in an interesting and engaging way, commonsense knowledge has to be integrated into the model effectively. In this paper, we investigate the impact of providing commonsense knowledge about the concepts covered in the dialogue. Our model represents the first attempt to integrating a large commonsense knowledge base into end-toend conversational models. In the retrieval-based scenario, we propose a model to jointly take into account message content and related commonsense for selecting an appropriate response. Our experiments suggest that the knowledgeaugmented models are superior to their knowledge-free counterparts.",
"title": ""
},
{
"docid": "a126d8183668cbf15cd8aec4cf49bb3f",
"text": "The present meta-analysis investigated the effectiveness of strategies derived from the process model of emotion regulation in modifying emotional outcomes as indexed by experiential, behavioral, and physiological measures. A systematic search of the literature identified 306 experimental comparisons of different emotion regulation (ER) strategies. ER instructions were coded according to a new taxonomy, and meta-analysis was used to evaluate the effectiveness of each strategy across studies. The findings revealed differences in effectiveness between ER processes: Attentional deployment had no effect on emotional outcomes (d(+) = 0.00), response modulation had a small effect (d(+) = 0.16), and cognitive change had a small-to-medium effect (d(+) = 0.36). There were also important within-process differences. We identified 7 types of attentional deployment, 4 types of cognitive change, and 4 types of response modulation, and these distinctions had a substantial influence on effectiveness. Whereas distraction was an effective way to regulate emotions (d(+) = 0.27), concentration was not (d(+) = -0.26). Similarly, suppressing the expression of emotion proved effective (d(+) = 0.32), but suppressing the experience of emotion or suppressing thoughts of the emotion-eliciting event did not (d(+) = -0.04 and -0.12, respectively). Finally, reappraising the emotional response proved less effective (d(+) = 0.23) than reappraising the emotional stimulus (d(+) = 0.36) or using perspective taking (d(+) = 0.45). The review also identified several moderators of strategy effectiveness including factors related to the (a) to-be-regulated emotion, (b) frequency of use and intended purpose of the ER strategy, (c) study design, and (d) study characteristics.",
"title": ""
},
{
"docid": "63de624a33f7c9362b477aabd9faac51",
"text": "24 GHz circularly polarized Doppler front-end with a single antenna is developed. The radar system is composed of 24 GHz circularly polarized Doppler radar module, signal conditioning block, DAQ unit, and signal processing program. 24 GHz Doppler radar receiver front-end IC which is comprised of 3-stage LNA, single-ended mixer, and Lange coupler is fabricated with commercial InGaP/GaAs HBT technology. To reduce the chip size and suppress self-mixing, single-ended mixer which uses Tx leakage as a LO signal of the mixer is used. The operation of the developed radar front-end is demonstrated by measuring human vital signal. Compact size and high sensitivity can be achieved at the same time with the circularly polarized Doppler radar with a single antenna.",
"title": ""
},
{
"docid": "b7bfebcf77d9486473b9fcd1f4b91e63",
"text": "One of the most widespread applications of the Global Positioning System (GPS) is vehicular navigation. Improving the navigation accuracy continues to be a focus of research, commonly answered by the use of additional sensors. A sensor commonly fused with GPS is the inertial measurement unit (IMU). Due to the fact that the requirements of commercial systems are low cost, small size, and power conservative, micro-electro mechanical sensors (MEMS) IMUs are used. They provide navigation capability even in the absence of GPS signals or in the presence of high multipath or jamming. This paper addresses a centralized filter construction whereby navigation solutions from multiple IMUs are fused together to improve accuracy in GPS degraded areas. The proposed filter is a collection of several single IMU block filters. Each block filter is a 21 state IMU filter. Because each block filter estimates position, velocity and attitude, the system can utilize relative updates between the IMUs. These relative updates provide a method of reducing the position drift in the absence of GPS observations. The proposed filter’s performance is analyzed as a function of the number of IMUs used and relative update type, using a data set consisting of GPS outages, urban canyons and residential open sky conditions. While the use of additional IMUs (including a single IMU) provides negligible improvement in open sky conditions (where GPS alone is sufficient), the use of two, three, four and five IMUs provided a horizontal position improvement of 25 %, 29 %, 32 %, and 34 %, respectively, when GPS observations are removed for 30 seconds. Similarly, the velocity RMS improved by 25 %, 31%, 33%, and 34% for two, three, four and five IMUs, respectively. Attitude estimation also improves significantly ranging from 30 % – 76 %. Results also indicate that the use of more IMUs provides the system with better multipath rejection and performance in urban canyons.",
"title": ""
},
{
"docid": "e849cdf1237792fdf7bcded91c35c398",
"text": "Purpose – System usage and user satisfaction are widely accepted and used as surrogate measures of IS success. Past studies attempted to explore the relationship between system usage and user satisfaction but findings are mixed, inconclusive and misleading. The main objective of this research is to better understand and explain the nature and strength of the relationship between system usage and user satisfaction by resolving the existing inconsistencies in the IS research and to validate this relationship empirically as defined in Delone and McLean’s IS success model. Design/methodology/approach – “Meta-analysis” as a research approach was adopted because of its suitability regarding the nature of the research and its capability of dealing with exploring relationships that may be obscured in other approaches to synthesize research findings. Meta-analysis findings contributed towards better explaining the relationship between system usage and user satisfaction, the main objectives of this research. Findings – This research examines critically the past findings and resolves the existing inconsistencies. The meta-analysis findings explain that there exists a significant positive relationship between “system usage” and “user satisfaction” (i.e. r 1⁄4 0:2555) although not very strong. This research empirically validates this relationship that has already been proposed by Delone and McLean in their IS success model. Provides a guide for future research to explore the mediating variables that might affect the relationship between system usage and user satisfaction. Originality/value – This research better explains the relationship between system usage and user satisfaction by resolving contradictory findings in the past research and contributes to the existing body of knowledge relating to IS success.",
"title": ""
},
{
"docid": "ff9e0e5c2bb42955d3d29db7809414a1",
"text": "We present a novel methodology for the automated detection of breast lesions from dynamic contrast-enhanced magnetic resonance volumes (DCE-MRI). Our method, based on deep reinforcement learning, significantly reduces the inference time for lesion detection compared to an exhaustive search, while retaining state-of-art accuracy. This speed-up is achieved via an attention mechanism that progressively focuses the search for a lesion (or lesions) on the appropriate region(s) of the input volume. The attention mechanism is implemented by training an artificial agent to learn a search policy, which is then exploited during inference. Specifically, we extend the deep Q-network approach, previously demonstrated on simpler problems such as anatomical landmark detection, in order to detect lesions that have a significant variation in shape, appearance, location and size. We demonstrate our results on a dataset containing 117 DCE-MRI volumes, validating run-time and accuracy of lesion detection.",
"title": ""
},
{
"docid": "63405a3fc4815e869fc872bb96bb8a33",
"text": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, that already can solve formulas of impressive size up to 100s of thousands of variables. The main challenge is to find a representation which lends to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.",
"title": ""
},
{
"docid": "c347f649a6a183d7ee3f5abddfcbc2a1",
"text": "Concern has grown regarding possible harm to the social and psychological development of children and adolescents exposed to Internet pornography. Parents, academics and researchers have documented pornography from the supply side, assuming that its availability explains consumption satisfactorily. The current paper explored the user's dimension, probing whether pornography consumers differed from other Internet users, as well as the social characteristics of adolescent frequent pornography consumers. Data from a 2004 survey of a national representative sample of the adolescent population in Israel were used (n=998). Adolescent frequent users of the Internet for pornography were found to differ in many social characteristics from the group that used the Internet for information, social communication and entertainment. Weak ties to mainstream social institutions were characteristic of the former group but not of the latter. X-rated material consumers proved to be a distinct sub-group at risk of deviant behaviour.",
"title": ""
},
{
"docid": "5cb44c68cecb0618be14cd52182dc96e",
"text": "Recognition of objects using Deep Neural Networks is an active area of research and many breakthroughs have been made in the last few years. The paper attempts to indicate how far this field has progressed. The paper briefly describes the history of research in Neural Networks and describe several of the recent advances in this field. The performances of recently developed Neural Network Algorithm over benchmark datasets have been tabulated. Finally, some the applications of this field have been provided.",
"title": ""
},
{
"docid": "527e70797ec7931687d17d26f1f64428",
"text": "We experimentally demonstrate the focusing of visible light with ultra-thin, planar metasurfaces made of concentrically perforated, 30-nm-thick gold films. The perforated nano-voids—Babinet-inverted (complementary) nano-antennas—create discrete phase shifts and form a desired wavefront of cross-polarized, scattered light. The signal-to-noise ratio in our complementary nano-antenna design is at least one order of magnitude higher than in previous metallic nano-antenna designs. We first study our proof-of-concept ‘metalens’ with extremely strong focusing ability: focusing at a distance of only 2.5 mm is achieved experimentally with a 4-mm-diameter lens for light at a wavelength of 676 nm. We then extend our work with one of these ‘metalenses’ and achieve a wavelength-controllable focal length. Optical characterization of the lens confirms that switching the incident wavelength from 676 to 476 nm changes the focal length from 7 to 10 mm, which opens up new opportunities for tuning and spatially separating light at different wavelengths within small, micrometer-scale areas. All the proposed designs can be embedded on-chip or at the end of an optical fiber. The designs also all work for two orthogonal, linear polarizations of incident light. Light: Science & Applications (2013) 2, e72; doi:10.1038/lsa.2013.28; published online 26 April 2013",
"title": ""
},
{
"docid": "2fdf6538c561e05741baafe43ec6f145",
"text": "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.",
"title": ""
},
{
"docid": "3848dd7667a25e8e7f69ecc318324224",
"text": "This paper describes the CloudProtect middleware that empowers users to encrypt sensitive data stored within various cloud applications. However, most web applications require data in plaintext for implementing the various functionalities and in general, do not support encrypted data management. Therefore, CloudProtect strives to carry out the data transformations (encryption/decryption) in a manner that is transparent to the application, i.e., preserves all functionalities of the application, including those that require data to be in plaintext. Additionally, CloudProtect allows users flexibility in trading off performance for security in order to let them optimally balance their privacy needs and usage-experience.",
"title": ""
},
{
"docid": "af9137900cd3fe09d9bea87f38324b80",
"text": "The cognitive walkthrough is a technique for evaluating the design of a user interface, with speciaJ attention to how well the interface supports “exploratory learning,” i.e., first-time use without formal training. The evaluation can be performed by the system’s designers in the e,arly stages of design, before empirical user testing is possible. Early versions of the walkthrough method relied on a detailed series of questions, to be answered on paper or electronic forms. This tutorial presents a simpler method, founded in an understanding of the cognitive theory that describes a user’s interactions with a system. The tutorial refines the method on the basis of recent empirical and theoretical studies of exploratory learning with display-based interfaces. The strengths and limitations of the walkthrough method are considered, and it is placed into the context of a more complete design approach.",
"title": ""
},
{
"docid": "83525470a770a036e9c7bb737dfe0535",
"text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.",
"title": ""
},
{
"docid": "2dbc68492e54d61446dac7880db71fdd",
"text": "Supervised deep learning methods have shown promising results for the task of monocular depth estimation; but acquiring ground truth is costly, and prone to noise as well as inaccuracies. While synthetic datasets have been used to circumvent above problems, the resultant models do not generalize well to natural scenes due to the inherent domain shift. Recent adversarial approaches for domain adaption have performed well in mitigating the differences between the source and target domains. But these methods are mostly limited to a classification setup and do not scale well for fully-convolutional architectures. In this work, we propose AdaDepth - an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation. The proposed approach is devoid of above limitations through a) adversarial learning and b) explicit imposition of content consistency on the adapted target representation. Our unsupervised approach performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting.",
"title": ""
},
{
"docid": "da6771ebd128ce1dc58f2ab1d56b065f",
"text": "We present a method for the automatic classification of text documents into a dynamically defined set of topics of interest. The proposed approach requires only a domain ontology and a set of user-defined classification topics, specified as contexts in the ontology. Our method is based on measuring the semantic similarity of the thematic graph created from a text document and the ontology sub-graphs resulting from the projection of the defined contexts. The domain ontology effectively becomes the classifier, where classification topics are expressed using the defined ontological contexts. In contrast to the traditional supervised categorization methods, the proposed method does not require a training set of documents. More importantly, our approach allows dynamically changing the classification topics without retraining of the classifier. In our experiments, we used the English language Wikipedia converted to an RDF ontology to categorize a corpus of current Web news documents into selection of topics of interest. The high accuracy achieved in our tests demonstrates the effectiveness of the proposed method, as well as the applicability of Wikipedia for semantic text categorization purposes.",
"title": ""
},
{
"docid": "3512d0a45a764330c8a66afab325d03d",
"text": "Self-concept clarity (SCC) references a structural aspect oftbe self-concept: the extent to which selfbeliefs are clearly and confidently defined, internally consistent, and stable. This article reports the SCC Scale and examines (a) its correlations with self-esteem (SE), the Big Five dimensions, and self-focused attention (Study l ); (b) its criterion validity (Study 2); and (c) its cultural boundaries (Study 3 ). Low SCC was independently associated with high Neuroticism, low SE, low Conscientiousness, low Agreeableness, chronic self-analysis, low internal state awareness, and a ruminative form of self-focused attention. The SCC Scale predicted unique variance in 2 external criteria: the stability and consistency of self-descriptions. Consistent with theory on Eastern and Western selfconstruals, Japanese participants exhibited lower levels of SCC and lower correlations between SCC and SE than did Canadian participants.",
"title": ""
},
{
"docid": "9924e44d94d00a7a3dbd313409f5006a",
"text": "Multiple-instance problems arise from the situations where training class labels are attached to sets of samples (named bags), instead of individual samples within each bag (called instances). Most previous multiple-instance learning (MIL) algorithms are developed based on the assumption that a bag is positive if and only if at least one of its instances is positive. Although the assumption works well in a drug activity prediction problem, it is rather restrictive for other applications, especially those in the computer vision area. We propose a learning method, MILES (multiple-instance learning via embedded instance selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels. MILES maps each bag into a feature space defined by the instances in the training bags via an instance similarity measure. This feature mapping often provides a large number of redundant or irrelevant features. Hence, 1-norm SVM is applied to select important features as well as construct classifiers simultaneously. We have performed extensive experiments. In comparison with other methods, MILES demonstrates competitive classification accuracy, high computation efficiency, and robustness to labeling uncertainty",
"title": ""
}
] | scidocsrr |
98b324cdf56d562cf57cf375552f42be | Big Data Deep Learning: Challenges and Perspectives | [
{
"docid": "70e6148316bd8915afd8d0908fb5ab0d",
"text": "We consider the problem of using a large unla beled sample to boost performance of a learn ing algorithm when only a small set of labeled examples is available In particular we con sider a problem setting motivated by the task of learning to classify web pages in which the description of each example can be partitioned into two distinct views For example the de scription of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page We assume that either view of the example would be su cient for learning if we had enough labeled data but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled ex amples Speci cally the presence of two dis tinct views of each example suggests strategies in which two learning algorithms are trained separately on each view and then each algo rithm s predictions on new unlabeled exam ples are used to enlarge the training set of the other Our goal in this paper is to provide a PAC style analysis for this setting and more broadly a PAC style framework for the general problem of learning from both labeled and un labeled data We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice This paper is to appear in the Proceedings of the Conference on Computational Learning Theory This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR INTRODUCTION In many machine learning settings unlabeled examples are signi cantly easier to come by than labeled ones One example of this is web page classi cation Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us such as all the CS faculty member pages or all the course home pages at some university To train such a system to automatically classify web pages one would typically rely on hand labeled web pages These labeled examples are fairly expensive to obtain because they require human e ort In contrast the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler Therefore we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible This web page learning problem has an interesting feature Each example in this domain can naturally be described using several di erent kinds of information One kind of information about a web page is the text appearing on the document itself A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web The two problem characteristics mentioned above availability of both labeled and unlabeled data and the availability of two di erent kinds of information about examples suggest the following learning strat egy Using an initial small set of labeled examples nd weak predictors based on each kind of information for instance we might nd that the phrase research inter ests on a web page is a weak indicator that the page is a faculty home page and we might nd that the phrase my advisor on a link is an indicator that the page being pointed to is a faculty page Then attempt to bootstrap from these weak predictors using unlabeled data For instance we could search for pages pointed to with links having the phrase my advisor and use them as probably positive examples to further train a 
learning algorithm based on the words on the text page and vice versa We call this type of bootstrapping co training and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting see for instance The question this raises is is there any reason to believe co training will help Our goal is to address this question by developing a PAC style theoretical framework to better understand the issues involved in this approach We also give some preliminary empirical results on classifying university web pages see Section that are encouraging in this context More broadly the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC as sumptions We address this issue by proposing a notion of compatibility between a data distribution and a target function Section and discuss how this relates to other approaches to combining labeled and unlabeled data Section",
"title": ""
},
{
"docid": "c0d794e7275e7410998115303bf0cf79",
"text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.",
"title": ""
}
] | [
{
"docid": "d1b79ace26173ebe954bca25a06c5e34",
"text": "Recent proposals for deterministic database system designs argue that deterministic database systems facilitate replication since the same input can be independently sent to two different replicas without concern for replica divergence. In addition, they argue that determinism yields performance benefits due to (1) the introduction of deadlock avoidance techniques, (2) the reduction (or elimination) of distributed commit protocols, and (3) light-weight locking. However, these performance benefits are not universally applicable, and there exist several disadvantages of determinism, including (1) the additional overhead of processing transactions for which it is not known in advance what data will be accessed, (2) an inability to abort transactions arbitrarily (e.g., in the case of database or partition overload), and (3) the increased latency required by a preprocessing layer that ensures that the same input is sent to every replica. This paper presents a thorough experimental study that carefully investigates both the advantages and disadvantages of determinism, in order to give a database user a more complete understanding of which database to use for a given database workload and cluster configuration.",
"title": ""
},
{
"docid": "be4defd26cf7c7a29a85da2e15132be9",
"text": "The quantity of rooftop solar photovoltaic (PV) installations has grown rapidly in the US in recent years. There is a strong interest among decision makers in obtaining high quality information about rooftop PV, such as the locations, power capacity, and energy production of existing rooftop PV installations. Solar PV installations are typically connected directly to local power distribution grids, and therefore it is important for the reliable integration of solar energy to have information at high geospatial resolutions: by county, zip code, or even by neighborhood. Unfortunately, traditional means of obtaining this information, such as surveys and utility interconnection filings, are limited in availability and geospatial resolution. In this work a new approach is investigated where a computer vision algorithm is used to detect rooftop PV installations in high resolution color satellite imagery and aerial photography. It may then be possible to use the identified PV images to estimate power capacity and energy production for each array of panels, yielding a fast, scalable, and inexpensive method to obtain rooftop PV estimates for regions of any size. The aim of this work is to investigate the feasibility of the first step of the proposed approach: detecting rooftop PV in satellite imagery. Towards this goal, a collection of satellite rooftop images is used to develop and evaluate a detection algorithm. The results show excellent detection performance on the testing dataset and that, with further development, the proposed approach may be an effective solution for fast and scalable rooftop PV information collection.",
"title": ""
},
{
"docid": "6b4a4e5271f5a33d3f30053fc6c1a4ff",
"text": "Based on environmental, legal, social, and economic factors, reverse logistics and closed-loop supply chain issues have attracted attention among both academia and practitioners. This attention is evident by the vast number of publications in scientific journals which have been published in recent years. Hence, a comprehensive literature review of recent and state-of-the-art papers is vital to draw a framework of the past, and to shed light on future directions. The aim of this paper is to review recently published papers in reverse logistic and closed-loop supply chain in scientific journals. A total of 382 papers published between January 2007 and March 2013 are selected and reviewed. The papers are then analyzed and categorized to construct a useful foundation of past research. Finally, gaps in the literature are identified to clarify and to suggest future research opportunities. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "501f9cb511e820c881c389171487f0b4",
"text": "An omnidirectional circularly polarized (CP) antenna array is proposed. The antenna array is composed of four identical CP antenna elements and one parallel strip-line feeding network. Each of CP antenna elements comprises a dipole and a zero-phase-shift (ZPS) line loop. The in-phase fed dipole and the ZPS line loop generate vertically and horizontally polarized omnidirectional radiation, respectively. Furthermore, the vertically polarized dipole is positioned in the center of the horizontally polarized ZPS line loop. The size of the loop is designed such that a 90° phase difference is realized between the two orthogonal components because of the spatial difference and, therefore, generates CP omnidirectional radiation. A 1 × 4 antenna array at 900 MHz is prototyped and targeted to ultra-high frequency (UHF) radio frequency identification (RFID) applications. The measurement results show that the antenna array achieves a 10-dB return loss over a frequency range of 900-935 MHz and 3-dB axial-ratio (AR) from 890 to 930 MHz. At the frequency of 915 MHz, the measured maximum AR of 1.53 dB, maximum gain of 5.4 dBic, and an omnidirectionality of ±1 dB are achieved.",
"title": ""
},
{
"docid": "5b32a82676846632b0f4d1bf0941156c",
"text": "In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling.",
"title": ""
},
{
"docid": "e767659e0d8a778dacda0f6642a3d292",
"text": "Alrstract-We present a new self-organizing neural network model that has two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches ( e.g., the Kohonen feature map) is the ability o f the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process that also includes occasional removal o f units. The second variant of the model is a supervised learning method that results from the combination of the above-mentioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible--in contrast to earlier approaches--to perform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks that generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented that are better than any results previously published.",
"title": ""
},
{
"docid": "a7ac6803295b7359f5c8c0fcdd26e0e7",
"text": "The Internet of Things (IoT), the idea of getting real-world objects connected with each other, will change the way users organize, obtain and consume information radically. Internet of Things (IoT) enables various applications (crop growth monitoring and selection, irrigation decision support, etc.) in Digital Agriculture domain. The Wireless Sensors Network (WSN) is widely used to build decision support systems. These systems overcomes many problems in the real-world. One of the most interesting fields having an increasing need of decision support systems is Precision Agriculture (PA). Through sensor networks, agriculture can be connected to the IoT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of this approach which provides real-time information about the lands and crops that will help farmers make right decisions. The major advantage is implementation of WSN in Precision Agriculture (PA) will optimize the usage of water fertilizers while maximizing the yield of the crops and also will help in analyzing the weather conditions of the field.",
"title": ""
},
{
"docid": "fde101a0604eaa703979c56aa3ab8e93",
"text": "Community Question Answering (cQA) forums have become a popular medium for soliciting direct answers to specific questions of users from experts or other experienced users on a given topic. However, for a given question, users sometimes have to sift through a large number of low-quality or irrelevant answers to find out the answer which satisfies their information need. To alleviate this, the problem of Answer Quality Prediction (AQP) aims to predict the quality of an answer posted in response to a forum question. Current AQP systems either learn models using a) various hand-crafted features (HCF) or b) use deep learning (DL) techniques which automatically learn the required feature representations. In this paper, we propose a novel approach for AQP known as -“Deep Feature Fusion Network (DFFN)”which leverages the advantages of both hand-crafted features and deep learning based systems. Given a question-answer pair along with its metadata, DFFN independently a) learns deep features using a Convolutional Neural Network (CNN) and b) computes hand-crafted features using various external resources and then combines them using a deep neural network trained to predict the final answer quality. DFFN achieves stateof-the-art performance on the standard SemEval-2015 and SemEval-2016 benchmark datasets and outperforms baseline approaches which individually employ either HCF or DL based techniques alone.",
"title": ""
},
{
"docid": "5a583fe6fae9f0624bcde5043c56c566",
"text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with more compact size and almost same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts. Good performance is achieved for both the flat and conformal antennas.",
"title": ""
},
{
"docid": "1e3729164ecb6b74dbe5c9019bff7ae4",
"text": "Serverless or functions as a service runtimes have shown significant benefits to efficiency and cost for event-driven cloud applications. Although serverless runtimes are limited to applications requiring lightweight computation and memory, such as machine learning prediction and inference, they have shown improvements on these applications beyond other cloud runtimes. Training deep learning can be both compute and memory intensive. We investigate the use of serverless runtimes while leveraging data parallelism for large models, show the challenges and limitations due to the tightly coupled nature of such models, and propose modifications to the underlying runtime implementations that would mitigate them. For hyperparameter optimization of smaller deep learning models, we show that serverless runtimes can provide significant benefit.",
"title": ""
},
{
"docid": "6b530ee6c18f0c71b9b057108b2b2174",
"text": "We present a multi-modulus frequency divider based upon novel dual-modulus 4/5 and 2/3 true single-phase clocked (TSPC) prescalers. High-speed and low-power operation was achieved by merging the combinatorial counter logic with the flip-flop stages and removing circuit nodes at the expense of allowing a small short-circuit current during a short fraction of the operation cycle, thus minimizing the amount of nodes in the circuit. The divider is designed for operation in wireline or fibre-optic serial link transceivers with programmable divider ratios of 64, 80, 96, 100, 112, 120 and 140. The divider is implemented as part of a phase-locked loop around a quadrature voltage controlled oscillator in a 65nm CMOS technology. The maximum operating frequency is measured to be 17GHz with 2mW power consumption from a 1.0V supply voltage, and occupies 25×50μm2.",
"title": ""
},
{
"docid": "b9aeddd06de72d36e70a38a108132326",
"text": "Numerous surveys have shown that Web users are concerned about the loss of privacy associated with online tracking. Alarmingly, these surveys also reveal that people are also unaware of the amount of data sharing that occurs between ad exchanges, and thus underestimate the privacy risks associated with online tracking. In reality, the modern ad ecosystem is fueled by a flow of user data between trackers and ad exchanges. Although recent work has shown that ad exchanges routinely perform cookie matching with other exchanges, these studies are based on brittle heuristics that cannot detect all forms of information sharing, especially under adversarial conditions. In this study, we develop a methodology that is able to detect clientand server-side flows of information between arbitrary ad exchanges. Our key insight is to leverage retargeted ads as a tool for identifying information flows. Intuitively, our methodology works because it relies on the semantics of how exchanges serve ads, rather than focusing on specific cookie matching mechanisms. Using crawled data on 35,448 ad impressions, we show that our methodology can successfully categorize four different kinds of information sharing behavior between ad exchanges, including cases where existing heuristic methods fail. We conclude with a discussion of how our findings and methodologies can be leveraged to give users more control over what kind of ads they see and how their information is shared between ad exchanges.",
"title": ""
},
{
"docid": "fa8d8eda07b7045f69325670ba6aff27",
"text": "A three-axis tactile force sensor that determines the touch and slip/friction force may advance artificial skin and robotic applications by fully imitating human skin. The ability to detect slip/friction and tactile forces simultaneously allows unknown objects to be held in robotic applications. However, the functionalities of flexible devices have been limited to a tactile force in one direction due to difficulties fabricating devices on flexible substrates. Here we demonstrate a fully printed fingerprint-like three-axis tactile force and temperature sensor for artificial skin applications. To achieve economic macroscale devices, these sensors are fabricated and integrated using only printing methods. Strain engineering enables the strain distribution to be detected upon applying a slip/friction force. By reading the strain difference at four integrated force sensors for a pixel, both the tactile and slip/friction forces can be analyzed simultaneously. As a proof of concept, the high sensitivity and selectivity for both force and temperature are demonstrated using a 3×3 array artificial skin that senses tactile, slip/friction, and temperature. Multifunctional sensing components for a flexible device are important advances for both practical applications and basic research in flexible electronics.",
"title": ""
},
{
"docid": "516ef94fad7f7e5801bf1ef637ffb136",
"text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1",
"title": ""
},
{
"docid": "b6e2cc26befb5ccf0cd829f72354e6e0",
"text": "In this paper we explore the potential of quantum theory as a formal framework for capturing lexical meaning. We present a novel semantic space model that is syntactically aware, takes word order into account, and features key quantum aspects such as superposition and entanglement. We define a dependency-based Hilbert space and show how to represent the meaning of words by density matrices that encode dependency neighborhoods. Experiments on word similarity and association reveal that our model achieves results competitive with a variety of classical models.",
"title": ""
},
{
"docid": "01a4b2be52e379db6ace7fa8ed501805",
"text": "The goal of our work is to complete the depth channel of an RGB-D image. Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.",
"title": ""
},
{
"docid": "2d7de5390fdcd15fc3dbaa5d39cc0f1b",
"text": "This paper presents an experimental 8-element circular phased patch array antenna which can generate radio beams carrying mode -1 orbital angular momentum. A microstrip feeding network is used to excite the radiation elements. Both input impedance and radiation patterns are investigated based on numerical and experimental methods.",
"title": ""
},
{
"docid": "b418470025d74d745e75225861a1ed7e",
"text": "The brain which is composed of more than 100 billion nerve cells is a sophisticated biochemical factory. For many years, neurologists, psychotherapists, researchers, and other health care professionals have studied the human brain. With the development of computer and information technology, it makes brain complex spectrum analysis to be possible and opens a highlight field for the study of brain science. In the present work, observation and exploring study of the activities of brain under brainwave music stimulus are systemically made by experimental and spectrum analysis technology. From our results, the power of the 10.5Hz brainwave appears in the experimental figures, it was proved that upper alpha band is entrained under the special brainwave music. According to the Mozart effect and the analysis of improving memory performance, the results confirm that upper alpha band is indeed related to the improvement of learning efficiency.",
"title": ""
},
{
"docid": "9892b1c48afb42443e7957fe85f5cb27",
"text": "In this paper, we propose a new adaptive rendering method to improve the performance of Monte Carlo ray tracing, by reducing noise contained in rendered images while preserving high-frequency edges. Our method locally approximates an image with polynomial functions and the optimal order of each polynomial function is estimated so that our reconstruction error can be minimized. To robustly estimate the optimal order, we propose a multi-stage error estimation process that iteratively estimates our reconstruction error. In addition, we present an energy-preserving outlier removal technique to remove spike noise without causing noticeable energy loss in our reconstruction result. Also, we adaptively allocate additional ray samples to high error regions guided by our error estimation. We demonstrate that our approach outperforms state-of-the-art methods by controlling the tradeoff between reconstruction bias and variance through locally defining our polynomial order, even without need for filtering bandwidth optimization, the common approach of other recent methods.",
"title": ""
},
{
"docid": "1b6f2bfa0d52db3d12aa9fffbb722386",
"text": "PHP is a popular language for building websites, but also notoriously lax in that almost every value can be coerced into a value of any imaginable type. Therefore it often happens that PHP code does not behave as expected. We have devised a flexible system that can assist a programmer in discovering suspicious pieces of PHP code, accompanied by a measure of suspicion. The analysis we employ is constraint-based, uses a limited amount of context to improve precision for non-global variables, and applies widening to ensure termination. We have applied the system to a number of implementations made by programmers of various degrees of proficiency, showing that even with these technically rather simple means it is quite possible to obtain good results.",
"title": ""
}
] | scidocsrr |
595fbfcf14f6c8098a0df77a25bf5788 | Amazon Food Review Classification using Deep Learning and Recommender System | [
{
"docid": "544426cfa613a31ac903041afa946d89",
"text": "Recommender systems have the effect of guiding users in a personalized way to interesting objects in a large space of possible options. Content-based recommendation systems try to recommend items similar to those a given user has liked in the past. Indeed, the basic process performed by a content-based recommender consists in matching up the attributes of a user profile in which preferences and interests are stored, with the attributes of a content object (item), in order to recommend to the user new interesting items. This chapter provides an overview of content-based recommender systems, with the aim of imposing a degree of order on the diversity of the different aspects involved in their design and implementation. The first part of the chapter presents the basic concepts and terminology of contentbased recommender systems, a high level architecture, and their main advantages and drawbacks. The second part of the chapter provides a review of the state of the art of systems adopted in several application domains, by thoroughly describing both classical and advanced techniques for representing items and user profiles. The most widely adopted techniques for learning user profiles are also presented. The last part of the chapter discusses trends and future research which might lead towards the next generation of systems, by describing the role of User Generated Content as a way for taking into account evolving vocabularies, and the challenge of feeding users with serendipitous recommendations, that is to say surprisingly interesting items that they might not have otherwise discovered. Pasquale Lops Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: lops@di.uniba.it Marco de Gemmis Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: degemmis@di.uniba.it Giovanni Semeraro Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: semeraro@di.uniba.it",
"title": ""
},
{
"docid": "0a3f5ff37c49840ec8e59cbc56d31be2",
"text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.",
"title": ""
}
] | [
{
"docid": "ab1642d0e42f1a2e2d0c56c6740903b9",
"text": "The Human Gene Mutation Database (HGMD®) is a comprehensive collection of germline mutations in nuclear genes that underlie, or are associated with, human inherited disease. By June 2013, the database contained over 141,000 different lesions detected in over 5,700 different genes, with new mutation entries currently accumulating at a rate exceeding 10,000 per annum. HGMD was originally established in 1996 for the scientific study of mutational mechanisms in human genes. However, it has since acquired a much broader utility as a central unified disease-oriented mutation repository utilized by human molecular geneticists, genome scientists, molecular biologists, clinicians and genetic counsellors as well as by those specializing in biopharmaceuticals, bioinformatics and personalized genomics. The public version of HGMD ( http://www.hgmd.org ) is freely available to registered users from academic institutions/non-profit organizations whilst the subscription version (HGMD Professional) is available to academic, clinical and commercial users under license via BIOBASE GmbH.",
"title": ""
},
{
"docid": "873a24a210aa57fc22895500530df2ba",
"text": "We describe the winning entry to the Amazon Picking Challenge. From the experience of building this system and competing in the Amazon Picking Challenge, we derive several conclusions: 1) We suggest to characterize robotic system building along four key aspects, each of them spanning a spectrum of solutions—modularity vs. integration, generality vs. assumptions, computation vs. embodiment, and planning vs. feedback. 2) To understand which region of each spectrum most adequately addresses which robotic problem, we must explore the full spectrum of possible approaches. To achieve this, our community should agree on key aspects that characterize the solution space of robotic systems. 3) For manipulation problems in unstructured environments, certain regions of each spectrum match the problem most adequately, and should be exploited further. This is supported by the fact that our solution deviated from the majority of the other challenge entries along each of the spectra.",
"title": ""
},
{
"docid": "c6afc173351fe404f7c5b68d2a0bc0a8",
"text": "BACKGROUND\nCombined traumatic brain injury (TBI) and hemorrhagic shock (HS) is highly lethal. In a nonsurvival model of TBI + HS, addition of high-dose valproic acid (VPA) (300 mg/kg) to hetastarch reduced brain lesion size and associated swelling 6 hours after injury; whether this would have translated into better neurologic outcomes remains unknown. It is also unclear whether lower doses of VPA would be neuroprotective. We hypothesized that addition of low-dose VPA to normal saline (NS) resuscitation would result in improved long-term neurologic recovery and decreased brain lesion size.\n\n\nMETHODS\nTBI was created in anesthetized swine (40-43 kg) by controlled cortical impact, and volume-controlled hemorrhage (40% volume) was induced concurrently. After 2 hours of shock, animals were randomized (n = 5 per group) to NS (3× shed blood) or NS + VPA (150 mg/kg). Six hours after resuscitation, packed red blood cells were transfused, and animals were recovered. Peripheral blood mononuclear cells were analyzed for acetylated histone-H3 at lysine-9. A Neurological Severity Score (NSS) was assessed daily for 30 days. Brain magnetic resonance imaging was performed on Days 3 and 10. Cognitive performance was assessed by training animals to retrieve food from color-coded boxes.\n\n\nRESULTS\nThere was a significant increase in histone acetylation in the NS + VPA-treated animals compared with NS treatment. The NS + VPA group demonstrated significantly decreased neurologic impairment and faster speed of recovery as well as smaller brain lesion size compared with the NS group. Although the final cognitive function scores were similar between the groups, the VPA-treated animals reached the goal significantly faster than the NS controls.\n\n\nCONCLUSION\nIn this long-term survival model of TBI + HS, addition of low-dose VPA to saline resuscitation resulted in attenuated neurologic impairment, faster neurologic recovery, smaller brain lesion size, and a quicker normalization of cognitive functions.",
"title": ""
},
{
"docid": "cc8e675d89a33508a60a88ad6e5f1f55",
"text": "Anomaly detection becomes increasingly important in hyper-spectral image analysis, since it can now uncover many material substances which were previously unresolved by multi-spectral sensors. In this paper, we propose a Low-rank Tensor Decomposition based anomaly Detection (LTDD) algorithm for Hyperspectral Imagery. The HSI data cube is first modeled as a dense low-rank tensor plus a sparse tensor. Based on the obtained low-rank tensor, LTDD further decomposes the low-rank tensor using Tucker decomposition to extract the core tensor which is treated as the “support” of the anomaly spectral signatures. LTDD then adopts an unmixing approach to the reconstructed core tensor for anomaly detection. The experiments based on both simulated and real hyperspectral data sets verify the effectiveness of our algorithm.",
"title": ""
},
{
"docid": "8303d9d6c4abee81bf803240aa929747",
"text": "Kaposi's sarcoma (KS) is a multifocal hemorrhagic sarcoma that occurs primarily on the extremities. KS limited to the penis is rare and a well-recognized manifestation of acquired immune deficiency syndrome (AIDS). However, KS confined to the penis is extraordinary in human immunodeficiency virus (HIV)-negative patients. We present the case of a 68-year-old man with a dark reddish ulcerated nodule on the penile skin, which was reported as a nodular stage of KS. We detected no evidence of immunosuppression or AIDS or systemic involvements in further evaluations. In his past medical history, the patient had undergone three transurethral resections of bladder tumors due to urothelial cell carcinoma since 2000 and total gastrectomy, splenectomy, and adjuvant fluorouracil/cisplatin chemotherapy for 7 months due to advanced gastric carcinoma in 2005. The patient was circumcised and has had no recurrence for 2 years.",
"title": ""
},
{
"docid": "90560bde8f28e715c7c98560e79ee8dc",
"text": "We present a novel postprocessing utility called adaptive geometry image (AGIM) for global parameterization techniques that can embed a 3D surface onto a rectangular domain. This utility first converts a single rectangular parameterization into many different tessellations of square geometry images (GIMs) and then efficiently packs these GIMs into an image called AGIM. Therefore, undersampled regions of the input parameterization can be up-sampled accordingly until the local reconstruction error bound is met. The connectivity of AGIM can be quickly computed and dynamically changed at rendering time. AGIM does not have T-vertices, and therefore, no crack is generated between two neighboring GIMs at different tessellations. Experimental results show that AGIM can achieve significant PSNR gain over the input parameterization, AGIM retains the advantages of the original GIM and reduces the reconstruction error present in the original GIM technique. The AGIM is also suitable for global parameterization techniques based on quadrilateral complexes. Using the approximate sampling rates, the PolyCube-based quadrilateral complexes with AGIM can outperform state-of-the-art multichart GIM technique in terms of PSNR.",
"title": ""
},
{
"docid": "96bb4155000096c1cba6285ad82c9a4d",
"text": "0747-5632/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.chb.2011.10.002 ⇑ Corresponding author. Tel.: +65 6790 6636; fax: + E-mail addresses: leecs@ntu.edu.sg (C.S. Lee), malo 1 Tel.: +65 67905772; fax: +65 6791 5214. Recent events indicate that sharing news in social media has become a phenomenon of increasing social, economic and political importance because individuals can now participate in news production and diffusion in large global virtual communities. Yet, knowledge about factors influencing news sharing in social media remains limited. Drawing from the uses and gratifications (U&G) and social cognitive theories (SCT), this study explored the influences of information seeking, socializing, entertainment, status seeking and prior social media sharing experience on news sharing intention. A survey was designed and administered to 203 students in a large local university. Results from structural equation modeling (SEM) analysis revealed that respondents who were driven by gratifications of information seeking, socializing, and status seeking were more likely to share news in social media platforms. Prior experience with social media was also a significant determinant of news sharing intention. Implications and directions for future work are discussed. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "98f8c85de43a551dfbcf14b6ad2dc6cb",
"text": "ly, schema based data can be defined as a set of data (which is denoted as 'S') that satisfies the following properties: there exists a set of finite size of dimension (which is denoted as 'D') such that every element of S can be expressed as a linear combination of elements from D. Flexible schema based data is the negation of Schema based data. That is, there does NOT exit a set of finite size of dimension D such that every element of S can be expressed as a linear combination of elements from set D. Intuitively, schema based data can have unbounded number of elements but has a bounded dimensions as schema definition whereas flexible schema based data has unbounded dimensions. Because schema based data has finite dimensions, therefore, schema based data can be processed by separating the data away from its dimension so that an element in a schema based data set can be expressed by a vector of values, each of which represents the projection of the element in a particular dimension. All the dimensions are known as schema. Flexible schema based data cannot be processed by separating the data away from its dimension. Each element in a flexible schema based data has to keep track of its dimensions and the corresponding value. An element in a flexible schema based data is expressed by a vector of dimension and value (namevalue pair). Therefore, flexible schema based data requires store, query and index both schema and data together. 3.2 FSD Storage Current Practises Self-contained Document-object-store model: The current practice for storing FSD is to store FSD instances in a FSD collection using document-object-store model where both structure and data are stored together for each FSD instance so that it is self-descriptive without relying on a central schema dictionary. New structures can be added on a per-record basis without dealing with schema evolution. Aggregated storage supports full document-object retrieval efficiently without the cost of querying and stitching pieces of data from multiple relational tables. Each FSD instance can be independently imported, exported, distributed without any schema dependency. Table1 shows DDL to create resumeDoc_tab collection of resume XML documents, a shoppingCar_tab collection of shopping cart JSON objects. SQL/XML standard defines XML as a built-in datatype in SQL. For upcoming SQL/JSON standard [21], it supports storing JSON in SQL varchar, varbinary, CLOB, BLOB datatype with the new ‘IS JSON’ check constraint to ensure the data stored in the column is a valid JSON object. Adding a new domain FSD by storing into existing SQL datatype, such as varchar or LOB, without adding a new SQL type allows the new domain FSD to have full data operational completeness capability (Transactions, Replication, Partition, Security, Provenance, Export/Export, Client APIs etc) support with minimal development efforts. T1 CREATE TABLE resumeDoc_tab (id number, docEnterDate date, docVerifyDate date, resume XMLType) T2 CREATE TABLE shoppingCar_tab (oid number, shoppingCar BLOB check (shoppingCar IS JSON)) Table 1 – Document-Object-Store Table Examples Data-Guide as soft Schema: The data-guide can be computed from FSD collections to understand the complete structures of the data which helps to form queries over FSD collection. That is, FSD management with data-guide supports the paradigm of “storage without schema but query with schema”. 
For common top-level scalar attributes that exist in all FSD instances of a FSD collection, they can be automatically projected out as virtual columns or flexible table view [21, 22, 24]. For nested master-detail hierarchical structures exist in FSD instances, relational table indexes [11] and materialized views [35], are defined using FSD_TABLE() table function (Q4 in Table 2). They can be built as secondary structures on top of the primary hierarchical FSD storage to provide efficient relational view access of FSD. FSD_TABLE() serves as a bridge between FSD data and relational data. They are flexible because they can be created on demand. See section 5.2 for how to manage FSD_TABLE() and virtual columns as indexing or in-memory columnar structures. Furthermore, to ensure data integrity, soft schema can be defined as check constraint as verification mechanism but not storage mechanism. 3.3 FSD Storage Limitations and Research Challenges Single Hierarchy: The document-object-storage model is essentially a de-normalized storage model with single root hierarchy. When XML support was added into RDBMSs, the IMS hierarchical data model issues were brought up [32]. Fundamentally, the hierarchy storage model re-surfaces the single root hierarchy problem that relational model has resolved successfully. In particular, supporting m-n relationship in one hierarchy is quite awkward. Therefore, a research challenge is how to resolve single hierarchy problem in document-objectstorage mode that satisfies ‘data first, structural later’ requirement. Is there an aggregated storage model, other than E/R model, that can support multi-hierarchy access efficiently? Papers [20, 23] have proposed ideas on approaching certain aspects of this problem. Optimal instance level binary FSD format: The documentobject-storage model is essentially a de-normalized storage where master and detail data are stored together as one hierarchical tree structure, therefore, it is feasible to achieve better query performance than with normalized storage at the expense of update. Other than storing FSD instances in textual form, they can also be stored in a compact binary form native to the FSD domain data so that the binary storage format can be used to efficiently process FSD domain specific query language [3, 22]. In particular, since FSD is a hierarchical structure based, the domain language for hierarchical data is path-driven. The underlying native binary storage form of FSD is tree navigation friendly which improves significant performance improvement than text parsing based processing. The challenge in designing the binary storage format of FSD instance is to optimize the format for both query and update. A query friendly format typically uses compact structures to achieve ultra query performance while leaving no room for accommodating update, especially for the delta-update of a FSD instance involving structural change instead of just leaf value change. The current practise is to do full FSD instance update physically even though logically only components of a FSD instance need to be updated. Although typically a FSD instance is of small to medium size, the update may still cause larger transaction log than updating simple relational columns. A command level logging approach [27] can be investigated to see if it is optimal for high frequent delta-update of FSD instances. 
Optimal FSD instance size: Although the size of FSD collections can be scaled to very large number, in practise, each FSD instances is of small to medium size instead of single large size. In fact, many vendors have imposed size limit per FSD instance. This is because each FSD instance provides a logical unit for concurrency access control, document and Index update and logging granularity. Supporting single large FSD instance requires RDBMS locking, logging to provide intra-document scalability [43] in addition to the current mature inter-document scalability. 4. Querying and Updating FSD 4.1 FSD Query and Update Requirements A FSD collection is stored as a table of FSD instances. A FSD instance itself is domain specific and typically has its own domain-specific query language. For FSD of XML documents, the domain-specific query language is XQuery. For FSD of JSON objects, the domain-specific query language is the SQL/JSON path language as described in [21]. Table 2 shows the example of SQL/XML[10] and SQL/JSON[21] queries and DML statements embedding XQuery and SQL/JSON path language. In general, the domain-specific query language provides the following requirements: • Capability of querying and navigating document-object structures declaratively: A FSD instance is not shredded into tables since hierarchies in a FSD can be flexible and dynamic without being modelled as a fixed master-detail join pattern. Therefore, it is natural to express hierarchical traversal of FSD as path navigation with value predicate constructs in the FSD domain language. The path name can contain a wildcard name match and the path step can be recursive to facilitate exploratory query of the FSD data. For example, capabilities of the wildcard tag name match and recursive descendant tag match in XPath expressions support the notation of navigating structures without knowing the exact names or the exact hierarchy of the structures. See ‘.//experience’ XPath expression in Q1 and Q2. Such capability is needed to provide flexibility of writing explorative and discovery queries. • Capability of doing full context aware text search declaratively: FSD instances can be document centric with mixture of textual content and structures. There is a significant amount of full text content in FSD that are subject to full text search. However, unlike plain textual document, FSD has text content that is embedded inside hierarchical structure. Full text search can be further confined within a context identified by path navigation into the FSD instance. Therefore, context aware full text search is needed in FSD domain languages. See XQuery full text search expression in XMLEXISTS() predicate of Q1 and Q2 and path-aware full text search expression in JSON_TEXTCONTAINS() predicate of Q3. • Capability of projecting, transforming object component and constructing new document or object: Unlike relational query results which are tuples of scalar data, results of path navigational queries can be fragments of FSD. New FSD can be constructed by extracting components of existing FSD and combine them through construction and transformation. Therefore, constructing and transform",
"title": ""
},
{
"docid": "b42c9db51f55299545588a1ee3f7102f",
"text": "With the increasing development of Web 2.0, such as social media and online businesses, the need for perception of opinions, attitudes, and emotions grows rapidly. Sentiment analysis, the topic studying such subjective feelings expressed in text, has attracted significant attention from both the research community and industry. Although we have known sentiment analysis as a task of mining opinions expressed in text and analyzing the entailed sentiments and emotions, so far the task is still vaguely defined in the research literature because it involves many overlapping concepts and sub-tasks. Because this is an important area of scientific research, the field needs to clear this vagueness and define various directions and aspects in detail, especially for students, scholars, and developers new to the field. In fact, the field includes numerous natural language processing tasks with different aims (such as sentiment classification, opinion information extraction, opinion summarization, sentiment retrieval, etc.) and these have multiple solution paths. Bing Liu has done a great job in this book in providing a thorough exploration and an anatomy of the sentiment analysis problem and conveyed a wealth of knowledge about different aspects of the field.",
"title": ""
},
{
"docid": "99d99ce673dfc4a6f5bf3e7d808a5570",
"text": "We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space. A specified number of discussion threads predicted to be popular are recommended, chosen from a fixed window of recent comments to track. Novel deep reinforcement learning architectures are studied for effective modeling of the value function associated with actions comprised of interdependent sub-actions. The proposed model, which represents dependence between sub-actions through a bi-directional LSTM, gives the best performance across different experimental configurations and domains, and it also generalizes well with varying numbers of recommendation requests.",
"title": ""
},
{
"docid": "9d9856d9e4aa8c66c38509a404f97a8c",
"text": "Constance Flanagan and Peter Levine survey research on civic engagement among U.S. adolescents and young adults. Civic engagement, they say, is important both for the functioning of democracies and for the growth and maturation it encourages in young adults, but opportunities for civic engagement are not evenly distributed by social class or race and ethnicity. Today's young adults, note the authors, are less likely than those in earlier generations to exhibit many important characteristics of citizenship, raising the question of whether these differences represent a decline or simply a delay in traditional adult patterns of civic engagement. Flanagan and Levine also briefly discuss the civic and political lives of immigrant youth in the United States, noting that because these youth make up a significant share of the current generation of young adults, their civic engagement is an important barometer of the future of democracy. The authors next survey differences in civic participation for youth from different social, racial, and ethnic backgrounds. They explore two sets of factors that contribute to a lower rate of civic engagement among low-income and minority young adults. The first is cumulative disadvantage-unequal opportunities and influences before adulthood, especially parental education. The second is different institutional opportunities for civic engagement among college and non-college youth during the young-adult years. Flanagan and Levine survey various settings where young adults spend time-schools and colleges, community organizations, faith-based institutions, community organizing and activism projects, and military and other voluntary service programs-and examine the opportunities for civic engagement that each affords. As the transition to adulthood has lengthened, say the authors, colleges have become perhaps the central institution for civic incorporation of younger generations. But no comparable institution exists for young adults who do not attend college. Opportunities for sustained civic engagement by year-long programs such as City Year could provide an alternative opportunity for civic engagement for young adults from disadvantaged families, allowing them to stay connected to mainstream opportunities and to adults who could mentor and guide their way.",
"title": ""
},
{
"docid": "c074e1fb58a51475337b53c75fba2ccb",
"text": "Given a set of entities, the all-pairs similarity search aims at identifying all pairs of entities that have similarity greater than (or distance smaller than) some user-defined threshold. In this article, we propose a parallel framework for solving this problem in metric spaces. Novel elements of our solution include: i) flexible support for multiple metrics of interest; ii) an autonomic approach to partition the input dataset with minimal redundancy to achieve good load-balance in the presence of limited computing resources; iii) an on-the- fly lossless compression strategy to reduce both the running time and the final output size. We validate the utility, scalability and the effectiveness of the approach on hundreds of machines using real and synthetic datasets.",
"title": ""
},
{
"docid": "9858386550b0193c079f1d7fe2b5b8b3",
"text": "Objective This study examined the associations between household food security (access to sufficient, safe, and nutritious food) during infancy and attachment and mental proficiency in toddlerhood. Methods Data from a longitudinal nationally representative sample of infants and toddlers (n = 8944) from the Early Childhood Longitudinal Study—9-month (2001–2002) and 24-month (2003–2004) surveys were used. Structural equation modeling was used to examine the direct and indirect associations between food insecurity at 9 months, and attachment and mental proficiency at 24 months. Results Food insecurity worked indirectly through depression and parenting practices to influence security of attachment and mental proficiency in toddlerhood. Conclusions Social policies that address the adequacy and predictability of food supplies in families with infants have the potential to affect parental depression and parenting behavior, and thereby attachment and cognitive development at very early ages.",
"title": ""
},
{
"docid": "46e8609b7cf5cfc970aa75fa54d3551d",
"text": "BACKGROUND\nAims were to assess the efficacy of metacognitive training (MCT) in people with a recent onset of psychosis in terms of symptoms as a primary outcome and metacognitive variables as a secondary outcome.\n\n\nMETHOD\nA multicenter, randomized, controlled clinical trial was performed. A total of 126 patients were randomized to an MCT or a psycho-educational intervention with cognitive-behavioral elements. The sample was composed of people with a recent onset of psychosis, recruited from nine public centers in Spain. The treatment consisted of eight weekly sessions for both groups. Patients were assessed at three time-points: baseline, post-treatment, and at 6 months follow-up. The evaluator was blinded to the condition of the patient. Symptoms were assessed with the PANSS and metacognition was assessed with a battery of questionnaires of cognitive biases and social cognition.\n\n\nRESULTS\nBoth MCT and psycho-educational groups had improved symptoms post-treatment and at follow-up, with greater improvements in the MCT group. The MCT group was superior to the psycho-educational group on the Beck Cognitive Insight Scale (BCIS) total (p = 0.026) and self-certainty (p = 0.035) and dependence self-subscale of irrational beliefs, comparing baseline and post-treatment. Moreover, comparing baseline and follow-up, the MCT group was better than the psycho-educational group in self-reflectiveness on the BCIS (p = 0.047), total BCIS (p = 0.045), and intolerance to frustration (p = 0.014). Jumping to Conclusions (JTC) improved more in the MCT group than the psycho-educational group (p = 0.021). Regarding the comparison within each group, Theory of Mind (ToM), Personalizing Bias, and other subscales of irrational beliefs improved in the MCT group but not the psycho-educational group (p < 0.001-0.032).\n\n\nCONCLUSIONS\nMCT could be an effective psychological intervention for people with recent onset of psychosis in order to improve cognitive insight, JTC, and tolerance to frustration. It seems that MCT could be useful to improve symptoms, ToM, and personalizing bias.",
"title": ""
},
{
"docid": "efb2da02ff429f4b9c90b791fa6b4ef5",
"text": "A CMOS single-photon avalanche diode (SPAD)-based quarter video graphics array image sensor with 8-μm pixel pitch and 26.8% fill factor (FF) is presented. The combination of analog pixel electronics and scalable shared-well SPAD devices facilitates high-resolution, high-FF SPAD imaging arrays exhibiting photon shot-noise-limited statistics. The SPAD has 47 counts/s dark count rate at 1.5 V excess bias (EB), 39.5% photon detection probability (PDP) at 480 nm, and a minimum of 1.1 ns dead time at 1 V EB. Analog single-photon counting imaging is demonstrated with maximum 14.2-mV/SPAD event sensitivity and 0.06e- minimum equivalent read noise. Binary quanta image sensor (QIS) 16-kframes/s real-time oversampling is shown, verifying single-photon QIS theory with 4.6× overexposure latitude and 0.168e- read noise.",
"title": ""
},
{
"docid": "5e05aa0d5c8c117d4c5eb47dfea96f2e",
"text": "We tackle the problem of automated exploit generation for web applications. In this regard, we present an approach that significantly improves the state-of-art in web injection vulnerability identification and exploit generation. Our approach for exploit generation tackles various challenges associated with typical web application characteristics: their multi-module nature, interposed user input, and multi-tier architectures using a database backend. Our approach develops precise models of application workflows, database schemas, and native functions to achieve high quality exploit generation. We implemented our approach in a tool called Chainsaw. Chainsaw was used to analyze 9 open source applications and generated over 199 first- and second-order injection exploits combined, significantly outperforming several related approaches.",
"title": ""
},
{
"docid": "d0ab81a7d2bd59de12bd687057d0c8ce",
"text": "Initially seen as a support function, Information Systems (IS) department’s importance has increased as the business environment has grown more dynamic and the power to collect, assess, and disseminate information has expanded. Properly implemented information systems have become an even more valuable strategic resource – one that any organization can use to improve its competitive advantage. IS departments are rapidly becoming strategic partners with other business functions and integral to the general success of the organization. This work summarizes key issues related to the changing role of IS in the business environment for senior practitioners and strategic planners focusing on legal, marketing, HR and corporate governance.",
"title": ""
},
{
"docid": "fb001e2fd9f2f25eb3d9a4ced27a12be",
"text": "Simulation is an appealing option for validating the safety of autonomous vehicles. Generative Adversarial Imitation Learning (GAIL) has recently been shown to learn representative human driver models. These human driver models were learned through training in single-agent environments, but they have difficulty in generalizing to multi-agent driving scenarios. We argue these difficulties arise because observations at training and test time are sampled from different distributions. This difference makes such models unsuitable for the simulation of driving scenes, where multiple agents must interact realistically over long time horizons. We extend GAIL to address these shortcomings through a parameter-sharing approach grounded in curriculum learning. Compared with single-agent GAIL policies, policies generated by our PS-GAIL method prove superior at interacting stably in a multi-agent setting and capturing the emergent behavior of human drivers.",
"title": ""
},
{
"docid": "2113e72a6cd27b1eaa20b705934c5904",
"text": "The formal and informal structures of firms and their external linkages have an important bearing on the rate and direction of innovation. This paper explores the properties of different types of firms with respect to the generation of new technology. Various archetypes are recognized and an effort is made to match organization structure to the type of innovation. The framework is relevant to technology and competition policy as it broadens the framework economists use to identify environments that assist innovation.",
"title": ""
},
{
"docid": "53bd1baec1e740c99a2fd22c858e8e60",
"text": "Garbage collection yields numerous software engineering benefits, but its quantitative impact on performance remains elusive. One can compare the cost of conservative garbage collection to explicit memory management in C/C++ programs by linking in an appropriate collector. This kind of direct comparison is not possible for languages designed for garbage collection (e.g., Java), because programs in these languages naturally do not contain calls to free. Thus, the actual gap between the time and space performance of explicit memory management and precise, copying garbage collection remains unknown.We introduce a novel experimental methodology that lets us quantify the performance of precise garbage collection versus explicit memory management. Our system allows us to treat unaltered Java programs as if they used explicit memory management by relying on oracles to insert calls to free. These oracles are generated from profile information gathered in earlier application runs. By executing inside an architecturally-detailed simulator, this \"oracular\" memory manager eliminates the effects of consulting an oracle while measuring the costs of calling malloc and free. We evaluate two different oracles: a liveness-based oracle that aggressively frees objects immediately after their last use, and a reachability-based oracle that conservatively frees objects just after they are last reachable. These oracles span the range of possible placement of explicit deallocation calls.We compare explicit memory management to both copying and non-copying garbage collectors across a range of benchmarks using the oracular memory manager, and present real (non-simulated) runs that lend further validity to our results. These results quantify the time-space tradeoff of garbage collection: with five times as much memory, an Appel-style generational collector with a non-copying mature space matches the performance of reachability-based explicit memory management. With only three times as much memory, the collector runs on average 17% slower than explicit memory management. However, with only twice as much memory, garbage collection degrades performance by nearly 70%. When physical memory is scarce, paging causes garbage collection to run an order of magnitude slower than explicit memory management.",
"title": ""
}
] | scidocsrr |
f00be14d0d244e4a4a1d68da10e5b06c | Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling | [
{
"docid": "44cf90b2abb22a4a8d9cc031e154cfa0",
"text": "Traditional approaches for learning 3D object categories use either synthetic data or manual supervision. In this paper, we propose a method which does not require manual annotations and is instead cued by observing objects from a moving vantage point. Our system builds on two innovations: a Siamese viewpoint factorization network that robustly aligns different videos together without explicitly comparing 3D shapes; and a 3D shape completion network that can extract the full shape of an object from partial observations. We also demonstrate the benefits of configuring networks to perform probabilistic predictions as well as of geometry-aware data augmentation schemes. We obtain state-of-the-art results on publicly-available benchmarks.",
"title": ""
},
{
"docid": "98cc792a4fdc23819c877634489d7298",
"text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.",
"title": ""
},
{
"docid": "16a5313b414be4ae740677597291d580",
"text": "We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http://cvgl.stanford.edu/projects/objectnet3d.",
"title": ""
},
{
"docid": "c6c9643816533237a29dd93fd420018f",
"text": "We present an algorithm for finding a meaningful vertex-to-vertex correspondence between two 3D shapes given as triangle meshes. Our algorithm operates on embeddings of the two shapes in the spectral domain so as to normalize them with respect to uniform scaling and rigid-body transformation. Invariance to shape bending is achieved by relying on geodesic point proximities on a mesh to capture its shape. To deal with stretching, we propose to use non-rigid alignment via thin-plate splines in the spectral domain. This is combined with a refinement step based on the geodesic proximities to improve dense correspondence. We show empirically that our algorithm outperforms previous spectral methods, as well as schemes that compute correspondence in the spatial domain via non-rigid iterative closest points or the use of local shape descriptors, e.g., 3D shape context",
"title": ""
}
] | [
{
"docid": "90414004f8681198328fb48431a34573",
"text": "Process models play important role in computer aided process engineering. Although the structure of these models are a priori known, model parameters should be estimated based on experiments. The accuracy of the estimated parameters largely depends on the information content of the experimental data presented to the parameter identification algorithm. Optimal experiment design (OED) can maximize the confidence on the model parameters. The paper proposes a new additive sequential evolutionary experiment design approach to maximize the amount of information content of experiments. The main idea is to use the identified models to design new experiments to gradually improve the model accuracy while keeping the collected information from previous experiments. This scheme requires an effective optimization algorithm, hence the main contribution of the paper is the incorporation of Evolutionary Strategy (ES) into a new iterative scheme of optimal experiment design (AS-OED). This paper illustrates the applicability of AS-OED for the design of feeding profile for a fed-batch biochemical reactor.",
"title": ""
},
{
"docid": "2292c60d69c94f31c2831c2f21c327d8",
"text": "With the abundance of raw data generated from various sources, Big Data has become a preeminent approach in acquiring, processing, and analyzing large amounts of heterogeneous data to derive valuable evidences. The size, speed, and formats in which data is generated and processed affect the overall quality of information. Therefore, Quality of Big Data (QBD) has become an important factor to ensure that the quality of data is maintained at all Big data processing phases. This paper addresses the QBD at the pre-processing phase, which includes sub-processes like cleansing, integration, filtering, and normalization. We propose a QBD model incorporating processes to support Data quality profile selection and adaptation. In addition, it tracks and registers on a data provenance repository the effect of every data transformation happened in the pre-processing phase. We evaluate the data quality selection module using large EEG dataset. The obtained results illustrate the importance of addressing QBD at an early phase of Big Data processing lifecycle since it significantly save on costs and perform accurate data analysis.",
"title": ""
},
{
"docid": "29f1144b4f3203bab29d7cb6b24fd065",
"text": "Virtual reality (VR)systems let users intuitively interact with 3D environments and have been used extensively for robotic teleoperation tasks. While more immersive than their 2D counterparts, early VR systems were expensive and required specialized hardware. Fortunately, there has been a recent proliferation of consumer-grade VR systems at affordable price points. These systems are inexpensive, relatively portable, and can be integrated into existing robotic frameworks. Our group has designed a VR teleoperation package for the Robot Operating System (ROS), ROS Reality, that can be easily integrated into such frameworks. ROS Reality is an open-source, over-the-Internet teleoperation interface between any ROS-enabled robot and any Unity-compatible VR headset. We completed a pilot study to test the efficacy of our system, with expert human users controlling a Baxter robot via ROS Reality to complete 24 dexterous manipulation tasks, compared to the same users controlling the robot via direct kinesthetic handling. This study provides insight into the feasibility of robotic teleoperation tasks in VR with current consumer-grade resources and exposes issues that need to be addressed in these VR systems. In addition, this paper presents a description of ROS Reality, its components, and architecture. We hope this system will be adopted by other research groups to allow for easy integration of VR teleoperated robots into future experiments.",
"title": ""
},
{
"docid": "ea3fd6ece19949b09fd2f5f2de57e519",
"text": "Multiple myeloma is the second most common hematologic malignancy. The treatment of this disease has changed considerably over the last two decades with the introduction to the clinical practice of novel agents such as proteasome inhibitors and immunomodulatory drugs. Basic research efforts towards better understanding of normal and missing immune surveillence in myeloma have led to development of new strategies and therapies that require the engagement of the immune system. Many of these treatments are under clinical development and have already started providing encouraging results. We, for the second time in the last two decades, are about to witness another shift of the paradigm in the management of this ailment. This review will summarize the major approaches in myeloma immunotherapies.",
"title": ""
},
{
"docid": "9d84f58c0a2c8694bf2fe8d2ba0da601",
"text": "Most existing Speech Emotion Recognition (SER) systems rely on turn-wise processing, which aims at recognizing emotions from complete utterances and an overly-complicated pipeline marred by many preprocessing steps and hand-engineered features. To overcome both drawbacks, we propose a real-time SER system based on end-to-end deep learning. Namely, a Deep Neural Network (DNN) that recognizes emotions from a one second frame of raw speech spectrograms is presented and investigated. This is achievable due to a deep hierarchical architecture, data augmentation, and sensible regularization. Promising results are reported on two databases which are the eNTERFACE database and the Surrey Audio-Visual Expressed Emotion (SAVEE) database.",
"title": ""
},
{
"docid": "86cb3c072e67bed8803892b72297812c",
"text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of",
"title": ""
},
{
"docid": "47929b2ff4aa29bf115a6728173feed7",
"text": "This paper presents a metaobject protocol (MOP) for C++. This MOP was designed to bring the power of meta-programming to C++ programmers. It avoids penalties on runtime performance by adopting a new meta-architecture in which the metaobjects control the compilation of programs instead of being active during program execution. This allows the MOP to be used to implement libraries of efficient, transparent language extensions.",
"title": ""
},
{
"docid": "f753712eed9e5c210810d2afd1366eb8",
"text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.",
"title": ""
},
{
"docid": "ebb024bbd923d35fd86adc2351073a48",
"text": "Background: Depression is a chronic condition that results in considerable disability, and particularly in later life, severely impacts the life quality of the individual with this condition. The first aim of this review article was to summarize, synthesize, and evaluate the research base concerning the use of dance-based exercises on health status, in general, and secondly, specifically for reducing depressive symptoms, in older adults. A third was to provide directives for professionals who work or are likely to work with this population in the future. Methods: All English language peer reviewed publications detailing the efficacy of dance therapy as an intervention strategy for older people in general, and specifically for minimizing depression and dependence among the elderly were analyzed.",
"title": ""
},
{
"docid": "3ab1222d051c42e400940afad76919ce",
"text": "OBJECTIVES\nThe purpose of this study was to evaluate the feasibility, safety, and clinical outcomes up to 1 year in patients undergoing combined simultaneous thoracoscopic surgical and transvenous catheter atrial fibrillation (AF) ablation.\n\n\nBACKGROUND\nThe combination of the transvenous endocardial approach with the thoracoscopic epicardial approach in a single AF ablation procedure overcomes the limitations of both techniques and should result in better outcomes.\n\n\nMETHODS\nA cohort of 26 consecutive patients with AF who underwent hybrid thoracoscopic surgical and transvenous catheter ablation were followed, with follow-up of up to 1 year.\n\n\nRESULTS\nTwenty-six patients (42% with persistent AF) underwent successful hybrid procedures. There were no complications. The mean follow-up period was 470 ± 154 days. In 23% of the patients, the epicardial lesions were not transmural, and endocardial touch-up was necessary. One-year success, defined according to the Heart Rhythm Society, European Heart Rhythm Association, and European Cardiac Arrhythmia Society consensus statement for the catheter and surgical ablation of AF, was 93% for patients with paroxysmal AF and 90% for patients with persistent AF. Two patients underwent catheter ablation for recurrent AF or left atrial flutter after the hybrid procedure.\n\n\nCONCLUSIONS\nA combined transvenous endocardial and thoracoscopic epicardial ablation procedure for AF is feasible and safe, with a single-procedure success rate of 83% at 1 year.",
"title": ""
},
{
"docid": "ad266da12fee45e4fbd060b56e998961",
"text": "Does Child Abuse Cause Crime? Child maltreatment, which includes both child abuse and child neglect, is a major social problem. This paper focuses on measuring the effects of child maltreatment on crime using data from the National Longitudinal Study of Adolescent Health (Add Health). We focus on crime because it is one of the most socially costly potential outcomes of maltreatment, and because the proposed mechanisms linking maltreatment and crime are relatively well elucidated in the literature. Our work addresses many limitations of the existing literature on child maltreatment. First, we use a large national sample, and investigate different types of abuse in a similar framework. Second, we pay careful attention to identifying the causal impact of abuse, by using a variety of statistical methods that make differing assumptions. These methods include: Ordinary Least Squares (OLS), propensity score matching estimators, and twin fixed effects. Finally, we examine the extent to which the effects of maltreatment vary with socio-economic status (SES), gender, and the severity of the maltreatment. We find that maltreatment approximately doubles the probability of engaging in many types of crime. Low SES children are both more likely to be mistreated and suffer more damaging effects. Boys are at greater risk than girls, at least in terms of increased propensity to commit crime. Sexual abuse appears to have the largest negative effects, perhaps justifying the emphasis on this type of abuse in the literature. Finally, the probability of engaging in crime increases with the experience of multiple forms of maltreatment as well as the experience of Child Protective Services (CPS) investigation. JEL Classification: I1, K4",
"title": ""
},
{
"docid": "01875eeb7da3676f46dd9d3f8bf3ecac",
"text": "It is shown that a certain tour of 49 cities, one in each of the 48 states and Washington, D C , has the shortest road distance T HE TRAVELING-SALESMAN PROBLEM might be described as follows: Find the shortest route (tour) for a salesman starting from a given city, visiting each of a specified group of cities, and then returning to the original point of departure. More generally, given an n by n symmetric matrix D={d,j), where du represents the 'distance' from / to J, arrange the points in a cyclic order in such a way that the sum of the du between consecutive points is minimal. Since there are only a finite number of possibilities (at most 3>' 2 (« —1)0 to consider, the problem is to devise a method of picking out the optimal arrangement which is reasonably efficient for fairly large values of n. Although algorithms have been devised for problems of similar nature, e.g., the optimal assignment problem,''** little is known about the traveling-salesman problem. We do not claim that this note alters the situation very much; what we shall do is outline a way of approaching the problem that sometimes, at least, enables one to find an optimal path and prove it so. In particular, it will be shown that a certain arrangement of 49 cities, one m each of the 48 states and Washington, D. C, is best, the du used representing road distances as taken from an atlas. * HISTORICAL NOTE-The origin of this problem is somewhat obscure. It appears to have been discussed informally among mathematicians at mathematics meetings for many years. Surprisingly little in the way of results has appeared in the mathematical literature.'\" It may be that the minimal-distance tour problem was stimulated by the so-called Hamiltonian game' which is concerned with finding the number of different tours possible over a specified network The latter problem is cited by some as the origin of group theory and has some connections with the famou8 Four-Color Conjecture ' Merrill Flood (Columbia Universitj') should be credited with stimulating interest in the traveling-salesman problem in many quarters. As early as 1937, he tried to obtain near optimal solutions in reference to routing of school buses. Both Flood and A W. Tucker (Princeton University) recall that they heard about the problem first in a seminar talk by Hassler Whitney at Princeton in 1934 (although Whitney, …",
"title": ""
},
{
"docid": "b04ae3842293f5f81433afbaa441010a",
"text": "Rootkits Trojan virus, which can control attacked computers, delete import files and even steal password, are much popular now. Interrupt Descriptor Table (IDT) hook is rootkit technology in kernel level of Trojan. The paper makes deeply analysis on the IDT hooks handle procedure of rootkit Trojan according to previous other researchers methods. We compare its IDT structure and programs to find how Trojan interrupt handler code can respond the interrupt vector request in both real address mode and protected address mode. Finally, we analyze the IDT hook detection methods of rootkits Trojan by Windbg or other professional tools.",
"title": ""
},
{
"docid": "afd1bc554857a1857ac4be5ee37cc591",
"text": "0953-5438/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.intcom.2011.04.007 ⇑ Corresponding author. E-mail addresses: m.cole@rutgers.edu (M.J. Co (J. Gwizdka), changl@eden.rutgers.edu (C. Liu), ralf@b rutgers.edu (N.J. Belkin), xiangminz@gmail.com (X. Zh We report on an investigation into people’s behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent information needs of these two different users groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task types and Web page types on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page level. Specifically we show that transitions between scanning and reading behavior in eye movement patterns and the amount of text processed may be an implicit indicator of the current task type facets. This may be useful in building user and task models that can be useful in personalization of information systems and so address design demands driven by increasingly complex user actions with information systems. One of the contributions of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "32731551289845c23452420fca121af5",
"text": "This work presents the current status of the Springrobot autonomous vehicle project, whose main objective is to develop a safety-warning and driver-assistance system and an automatic pilot for rural and urban traffic environments. This system uses a high precise digital map and a combination of various sensors. The architecture and strategy for the system are briefly described and the details of lane-marking detection algorithms are presented. The R and G channels of the color image are used to form graylevel images. The size of the resulting gray image is reduced and the Sobel operator with a very low threshold is used to get a grayscale edge image. In the adaptive randomized Hough transform, pixels of the gray-edge image are sampled randomly according to their weights corresponding to their gradient magnitudes. The three-dimensional (3-D) parametric space of the curve is reduced to the two-dimensional (2-D) and the one-dimensional (1-D) space. The paired parameters in two dimensions are estimated by gradient directions and the last parameter in one dimension is used to verify the estimated parameters by histogram. The parameters are determined coarsely and quantization accuracy is increased relatively by a multiresolution strategy. Experimental results in different road scene and a comparison with other methods have proven the validity of the proposed method.",
"title": ""
},
{
"docid": "22293b6953e2b28e1b3dc209649a7286",
"text": "The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a method for employing randomly connected circuits, or even “found” physical objects for meaningful computations, (iii) that it provides a theoretical context where heterogeneous, rather than stereotypical, local gates or processors increase the computational power of a circuit, (iv) that it provides a method for multiplexing different computations (on a common input) within the same circuit. This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices.",
"title": ""
},
{
"docid": "2c4c7f8dcf1681e278183525d520fc8c",
"text": "In the course of studies on the isolation of bioactive compounds from Philippine plants, the seeds of Moringa oleifera Lam. were examined and from the ethanol extract were isolated the new O-ethyl-4-(alpha-L-rhamnosyloxy)benzyl carbamate (1) together with seven known compounds, 4(alpha-L-rhamnosyloxy)-benzyl isothiocyanate (2), niazimicin (3), niazirin (4), beta-sitosterol (5), glycerol-1-(9-octadecanoate) (6), 3-O-(6'-O-oleoyl-beta-D-glucopyranosyl)-beta-sitosterol (7), and beta-sitosterol-3-O-beta-D-glucopyranoside (8). Four of the isolates (2, 3, 7, and 8), which were obtained in relatively good yields, were tested for their potential antitumor promoting activity using an in vitro assay which tested their inhibitory effects on Epstein-Barr virus-early antigen (EBV-EA) activation in Raji cells induced by the tumor promoter, 12-O-tetradecanoyl-phorbol-13-acetate (TPA). All the tested compounds showed inhibitory activity against EBV-EA activation, with compounds 2, 3 and 8 having shown very significant activities. Based on the in vitro results, niazimicin (3) was further subjected to in vivo test and found to have potent antitumor promoting activity in the two-stage carcinogenesis in mouse skin using 7,12-dimethylbenz(a)anthracene (DMBA) as initiator and TPA as tumor promoter. From these results, niazimicin (3) is proposed to be a potent chemo-preventive agent in chemical carcinogenesis.",
"title": ""
},
{
"docid": "29d1c63a3267501805b564613043cc89",
"text": "INTRODUCTION\nOutcome data of penile traction therapy (PTT) for the acute phase (AP) of Peyronie's disease (PD) have not been specifically studied.\n\n\nAIM\nThe aim of this study was to assess the effectiveness of a penile extender device for the treatment of patients with AP of PD.\n\n\nMETHODS\nA total of 55 patients underwent PTT for 6 months and were compared with 41 patients with AP of PD who did not receive active treatment (\"no intervention group\" [NIG]).\n\n\nMAIN OUTCOMES MEASURES\nPre- and posttreatment variables included degree of curvature, penile length and girth, pain by 0-10 cm visual analog scale (VAS), erectile function (EF) domain of the International Index of Erectile Function questionnaire, Erection Hardness Scale, Sexual Encounter Profile 2 question, and penile sonographic evaluation (only patients in the intervention group).\n\n\nRESULTS\nThe mean curvature decreased from 33° at baseline to 15° at 6 months and 13° at 9 months with a mean decrease 20° (P < 0.05) in the PTT group. VAS score for pain decreased from 5.5 to 2.5 after 6 months (P < 0.05). EF and erection hardness also improved significantly. The percentage of patients who were not able to achieve penetration decreased from 62% to 20% (P < 0.03). In the NIG, deformity increased significantly, stretched flaccid penile length decreased, VAS score for pain increased, and EF and erection hardness worsened. PTT was associated with the disappearance of sonographic plaques in 48% of patients. Furthermore, the need for surgery was reduced in 40% of patients who would otherwise have been candidates for surgery and simplified the complexity of the surgical procedure (from grafting to plication) in one out of every three patients.\n\n\nCONCLUSIONS\nPTT seems an effective treatment for the AP of PD in terms of pain reduction, penile curvature decrease, and improvement in sexual function.",
"title": ""
},
{
"docid": "27eaa5fe0c9684337ce8b6da9de9a8ed",
"text": "When we observe someone performing an action, do our brains simulate making that action? Acquired motor skills offer a unique way to test this question, since people differ widely in the actions they have learned to perform. We used functional magnetic resonance imaging to study differences in brain activity between watching an action that one has learned to do and an action that one has not, in order to assess whether the brain processes of action observation are modulated by the expertise and motor repertoire of the observer. Experts in classical ballet, experts in capoeira and inexpert control subjects viewed videos of ballet or capoeira actions. Comparing the brain activity when dancers watched their own dance style versus the other style therefore reveals the influence of motor expertise on action observation. We found greater bilateral activations in premotor cortex and intraparietal sulcus, right superior parietal lobe and left posterior superior temporal sulcus when expert dancers viewed movements that they had been trained to perform compared to movements they had not. Our results show that this 'mirror system' integrates observed actions of others with an individual's personal motor repertoire, and suggest that the human brain understands actions by motor simulation.",
"title": ""
}
] | scidocsrr |
1e80ee62264da24896de5947e9a5e266 | "Ooh Aah... Just a Little Bit" : A Small Amount of Side Channel Can Go a Long Way | [
{
"docid": "bc8b40babfc2f16144cdb75b749e3a90",
"text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.",
"title": ""
}
] | [
{
"docid": "ba79dd4818facbf0cef50bb1422f43e6",
"text": "A nonlinear energy operator (NEO) gives an estimate of the energy content of a linear oscillator. This has been used to quantify the AM-FM modulating signals present in a sinusoid. Here, the authors give a new interpretation of NEO and extend its use in stochastic signals. They show that NEO accentuates the high-frequency content. This instantaneous nature of NEO and its very low computational burden make it an ideal tool for spike detection. The efficacy of the proposed method has been tested with simulated signals as well as with real electroencephalograms (EEGs).",
"title": ""
},
{
"docid": "fa04e8e2e263d18ee821c7aa6ebed08e",
"text": "In this study we examined the effect of physical activity based labels on the calorie content of meals selected from a sample fast food menu. Using a web-based survey, participants were randomly assigned to one of four menus which differed only in their labeling schemes (n=802): (1) a menu with no nutritional information, (2) a menu with calorie information, (3) a menu with calorie information and minutes to walk to burn those calories, or (4) a menu with calorie information and miles to walk to burn those calories. There was a significant difference in the mean number of calories ordered based on menu type (p=0.02), with an average of 1020 calories ordered from a menu with no nutritional information, 927 calories ordered from a menu with only calorie information, 916 calories ordered from a menu with both calorie information and minutes to walk to burn those calories, and 826 calories ordered from the menu with calorie information and the number of miles to walk to burn those calories. The menu with calories and the number of miles to walk to burn those calories appeared the most effective in influencing the selection of lower calorie meals (p=0.0007) when compared to the menu with no nutritional information provided. The majority of participants (82%) reported a preference for physical activity based menu labels over labels with calorie information alone and no nutritional information. Whether these labels are effective in real-life scenarios remains to be tested.",
"title": ""
},
{
"docid": "ce24b783f2157fdb4365b60aa2e6163a",
"text": "Geosciences is a field of great societal relevance that requires solutions to several urgent problems facing our humanity and the planet. As geosciences enters the era of big data, machine learning (ML)— that has been widely successful in commercial domains—offers immense potential to contribute to problems in geosciences. However, problems in geosciences have several unique challenges that are seldom found in traditional applications, requiring novel problem formulations and methodologies in machine learning. This article introduces researchers in the machine learning (ML) community to these challenges offered by geoscience problems and the opportunities that exist for advancing both machine learning and geosciences. We first highlight typical sources of geoscience data and describe their properties that make it challenging to use traditional machine learning techniques. We then describe some of the common categories of geoscience problems where machine learning can play a role, and discuss some of the existing efforts and promising directions for methodological development in machine learning. We conclude by discussing some of the emerging research themes in machine learning that are applicable across all problems in the geosciences, and the importance of a deep collaboration between machine learning and geosciences for synergistic advancements in both disciplines.",
"title": ""
},
{
"docid": "148d0709c58111c2f703f68d348c09af",
"text": "There has been tremendous growth in the use of mobile devices over the last few years. This growth has fueled the development of millions of software applications for these mobile devices often called as 'apps'. Current estimates indicate that there are hundreds of thousands of mobile app developers. As a result, in recent years, there has been an increasing amount of software engineering research conducted on mobile apps to help such mobile app developers. In this paper, we discuss current and future research trends within the framework of the various stages in the software development life-cycle: requirements (including non-functional), design and development, testing, and maintenance. While there are several non-functional requirements, we focus on the topics of energy and security in our paper, since mobile apps are not necessarily built by large companies that can afford to get experts for solving these two topics. For the same reason we also discuss the monetizing aspects of a mobile app at the end of the paper. For each topic of interest, we first present the recent advances done in these stages and then we present the challenges present in current work, followed by the future opportunities and the risks present in pursuing such research.",
"title": ""
},
{
"docid": "881a495a8329c71a0202c3510e21b15d",
"text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.",
"title": ""
},
{
"docid": "4a4a0dde01536789bd53ec180a136877",
"text": "CONTEXT\nCurrent assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice.\n\n\nOBJECTIVES\nTo propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment.\n\n\nDATA SOURCES\nWe searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents.\n\n\nSTUDY SELECTION\nWe excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations.\n\n\nDATA EXTRACTION\nData were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs.\n\n\nDATA SYNTHESIS\nWe generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes.\n\n\nCONCLUSIONS\nIn addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.",
"title": ""
},
{
"docid": "4f1111b33789e25ed896ad366f0d98de",
"text": "As an ubiquitous method in natural language processing, word embeddings are extensively employed to map semantic properties of words into a dense vector representation. They capture semantic and syntactic relations among words but the vector corresponding to the words are only meaningful relative to each other. Neither the vector nor its dimensions have any absolute, interpretable meaning. We introduce an additive modification to the objective function of the embedding learning algorithm that encourages the embedding vectors of words that are semantically related a predefined concept to take larger values along a specified dimension, while leaving the original semantic learning mechanism mostly unaffected. In other words, we align words that are already determined to be related, along predefined concepts. Therefore, we impart interpretability to the word embedding by assigning meaning to its vector dimensions. The predefined concepts are derived from an external lexical resource, which in this paper is chosen as Roget’s Thesaurus. We observe that alignment along the chosen concepts is not limited to words in the Thesaurus and extends to other related words as well. We quantify the extent of interpretability and assignment of meaning from our experimental results. We also demonstrate the preservation of semantic coherence of the resulting vector space by using word-analogy and word-similarity tests. These tests show that the interpretability-imparted word embeddings that are obtained by the proposed framework do not sacrifice performances in common benchmark tests.",
"title": ""
},
{
"docid": "f7de8256c3d556a298e12cb555dd50b8",
"text": "Intrusion Detection Systems (IDSs) detects the network attacks by self-learning, etc. (9). Using Genetic Algorithms for intrusion detection has. Cloud Computing Using Genetic Algorithm. 1. Ku. To overcome this problem we are implementing intrusion detection system in which we use genetic. From Ignite at OSCON 2010, a 5 minute presentation by Bill Lavender: SNORT is popular. Based Intrusion Detection System (IDS), by applying Genetic Algorithm (GA) and Networking Using Genetic Algorithm (IDS) and Decision Tree is to identify. Intrusion Detection System Using Genetic Algorithm >>>CLICK HERE<<< Genetic algorithm (GA) has received significant attention for the design and length chromosomes (VLCs) in a GA-based network intrusion detection system. The proposed approach is tested using Defense Advanced Research Project. Abstract. Intrusion Detection System (IDS) is one of the key security components in today's networking environment. A great deal of attention has been recently. Computer security has become an important part of the day today's life. Not only single computer systems but an extensive network of the computer system. presents an overview of intrusion detection system and a hybrid technique for",
"title": ""
},
{
"docid": "e13fc2c9f5aafc6c8eb1909592c07a70",
"text": "We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to subsampling a neural network by dropping units. Training with DropOut, a randomly selected subset of activations are dropped, when training with DropConnect we drop a randomly subsets of weights. With DropAll we can perform both methods. We show the validity of our proposal by improving the classification error of networks trained with DropOut and DropConnect, on a common image classification dataset. To improve the classification, we also used a new method for combining networks, which was proposed in [3].",
"title": ""
},
{
"docid": "6427c3d11772ca84b6e1ad039d3abd33",
"text": "This paper proposes an algorithm that enables robots to efficiently learn human-centric models of their environment from natural language descriptions. Typical semantic mapping approaches augment metric maps with higher-level properties of the robot’s surroundings (e.g., place type, object locations), but do not use this information to improve the metric map. The novelty of our algorithm lies in fusing high-level knowledge, conveyed by speech, with metric information from the robot’s low-level sensor streams. Our method jointly estimates a hybrid metric, topological, and semantic representation of the environment. This semantic graph provides a common framework in which we integrate concepts from natural language descriptions (e.g., labels and spatial relations) with metric observations from low-level sensors. Our algorithm efficiently maintains a factored distribution over semantic graphs based upon the stream of natural language and low-level sensor information. We evaluate the algorithm’s performance and demonstrate that the incorporation of information from natural language increases the metric, topological and semantic accuracy of the recovered environment model.",
"title": ""
},
{
"docid": "fd317c492ed68bf14bdef38c27ed6696",
"text": "The systematic study of subcellular location patterns is required to fully characterize the human proteome, as subcellular location provides critical context necessary for understanding a protein's function. The analysis of tens of thousands of expressed proteins for the many cell types and cellular conditions under which they may be found creates a need for automated subcellular pattern analysis. We therefore describe the application of automated methods, previously developed and validated by our laboratory on fluorescence micrographs of cultured cell lines, to analyze subcellular patterns in tissue images from the Human Protein Atlas. The Atlas currently contains images of over 3000 protein patterns in various human tissues obtained using immunohistochemistry. We chose a 16 protein subset from the Atlas that reflects the major classes of subcellular location. We then separated DNA and protein staining in the images, extracted various features from each image, and trained a support vector machine classifier to recognize the protein patterns. Our results show that our system can distinguish the patterns with 83% accuracy in 45 different tissues, and when only the most confident classifications are considered, this rises to 97%. These results are encouraging given that the tissues contain many different cell types organized in different manners, and that the Atlas images are of moderate resolution. The approach described is an important starting point for automatically assigning subcellular locations on a proteome-wide basis for collections of tissue images such as the Atlas.",
"title": ""
},
{
"docid": "fcc434f43baae2cb1dbddd2f76fb9c7f",
"text": "For medical diagnoses and treatments, it is often desirable to wirelessly trace an object that moves inside the human body. A magnetic tracing technique suggested for such applications uses a small magnet as the excitation source, which does not require the power supply and connection wire. It provides good tracing accuracy and can be easily implemented. As the magnet moves, it establishes around the human body a static magnetic field, whose intensity is related to the magnet's 3-D position and 2-D orientation parameters. With magnetic sensors, these magnetic intensities can be detected in some predetermined spatial points, and the position and orientation parameters can be computed. Typically, a nonlinear optimization algorithm is applied to such a problem, but a linear algorithm is preferable for faster, more reliable computation, and lower complexity. In this paper, we propose a linear algorithm to determine the 5-D magnet's position and orientation parameters. With the data from five (or more) three-axis magnetic sensors, this algorithm results in a solution by the matrix and algebra computations. We applied this linear algorithm on the real localization system, and the results of simulations and real experiments show that satisfactory tracing accuracy can be achieved by using a sensor array with enough three-axis magnetic sensors.",
"title": ""
},
{
"docid": "2c14b3968aadadaa62f569acccb37d46",
"text": "The main objective of this paper is to review the technologies and models used in the Automatic music transcription system. Music Information Retrieval is a key problem in the field of music signal analysis and this can be achieved with the use of music transcription systems. It has proven to be a very difficult issue because of the complex and deliberately overlapped spectral structure of musical harmonies. Generally, the music transcription systems branched as automatic and semi-automatic approaches based on the user interventions needed in the transcription system. Among these we give a close view of the automatic music transcription systems. Different models and techniques were proposed so far in the automatic music transcription systems. However the performance of the systems derived till now not completely matched to the performance of a human expert. In this paper we go through the techniques used previously for the music transcription and discuss the limitations with them. Also, we give some directions for the enhancement of the music transcription system and this can be useful for the researches to develop fully automatic music transcription system.",
"title": ""
},
{
"docid": "cab8928a995b0cd2becb653155ecd8d9",
"text": "Inclusive in the engineering factors of growth of the economy of any country is the construction industry, of which Malaysia as a nation is not left out. In spite of its significant contribution, the industry is known to be an accident-prone consequent upon the dangerous activities taking place at the construction stage. However, occupational accidents of diverse categories do take place on the construction sites resulting in fatal and non-fatal injuries. This study was embarked upon by giving consideration to thirty fatal cases of accident that occurred in Malaysia during a period of fourteen months (September, 2015–October, 2016), with the reports extracted from the database of Department of Safety and Health (DOSH) in Malaysia. The research was aimed at discovering the types (categories) of fatal accident on the construction sites, with attention also given to the causes of the accidents. In achieving this, thirty cases were descriptively analysed, and availing a revelation of falls from height as the leading category of accident, and electrocution as the second, while the causative factors were discovered to be lack of compliance of workers to safe work procedures and nonchalant attitude towards harnessing themselves with personal protective equipment (PPE). Consequent upon the discovery through analysis, and an effort to avert subsequent accidents in order to save lives of construction workers it is recommended that the management should enforce the compliance of workers to safe work procedures and the compulsory use of PPE during operations, while the DOSH should embark on warding round the construction sites for inspection and giving a sanction to contractors failing to enforce compliance with safety regulations. Keywords— Construction Industry, Accident, Construction Site, Injuries, Safety",
"title": ""
},
{
"docid": "8f0276f7a902fa02b6236dfc76b882d2",
"text": "Support Vector Machines (SVMs) have successfully shown efficiencies in many areas such as text categorization. Although recommendation systems share many similarities with text categorization, the performance of SVMs in recommendation systems is not acceptable due to the sparsity of the user-item matrix. In this paper, we propose a heuristic method to improve the predictive accuracy of SVMs by repeatedly correcting the missing values in the user-item matrix. The performance comparison to other algorithms has been conducted. The experimental studies show that the accurate rates of our heuristic method are the highest.",
"title": ""
},
{
"docid": "e236a7cd184bbd09c9ffd90ad4cfd636",
"text": "It has been a challenge for financial economists to explain some stylized facts observed in securities markets, among them, high levels of trading volume. The most prominent explanation of excess volume is overconfidence. High market returns make investors overconfident and as a consequence, these investors trade more subsequently. The aim of our paper is to study the impact of the phenomenon of overconfidence on the trading volume and its role in the formation of the excess volume on the Tunisian stock market. Based on the work of Statman, Thorley and Vorkink (2006) and by using VAR models and impulse response functions, we find little evidence of the overconfidence hypothesis when we use volume (shares traded) as proxy of trading volume.",
"title": ""
},
{
"docid": "6c6e4e776a3860d1df1ccd7af7f587d5",
"text": "We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.",
"title": ""
},
{
"docid": "8f0ac7417daf0c995263274738dcbb13",
"text": "Technology platform strategies offer a novel way to orchestrate a rich portfolio of contributions made by the many independent actors who form an ecosystem of heterogeneous complementors around a stable platform core. This form of organising has been successfully used in the smartphone, gaming, commercial software, and other industrial sectors. While technology ecosystems require stability and homogeneity to leverage common investments in standard components, they also need variability and heterogeneity to meet evolving market demand. Although the required balance between stability and evolvability in the ecosystem has been addressed conceptually in the literature, we have less understanding of its underlying mechanics or appropriate governance. Through an extensive case study of a business software ecosystem consisting of a major multinational manufacturer of enterprise resource planning (ERP) software at the core, and a heterogeneous system of independent implementation partners and solution developers on the periphery, our research identifies three salient tensions that characterize the ecosystem: standard-variety; control-autonomy; and collective-individual. We then highlight the specific ecosystem governance mechanisms designed to simultaneously manage desirable and undesirable variance across each tension. Paradoxical tensions may manifest as dualisms, where actors are faced with contradictory and disabling „either/or‟ decisions. Alternatively, they may manifest as dualities, where tensions are framed as complementary and mutually-enabling. We identify conditions where latent, mutually enabling tensions become manifest as salient, disabling tensions. By identifying conditions in which complementary logics are overshadowed by contradictory logics, our study further contributes to the understanding of the dynamics of technology ecosystems, as well as the effective design of technology ecosystem governance that can explicitly embrace paradoxical tensions towards generative outcomes.",
"title": ""
},
{
"docid": "5e8fbfec1ff5bf432dbaadaf13c9ca75",
"text": "Multiple studies have illustrated the potential for dramatic societal, environmental and economic benefits from significant penetration of autonomous driving. However, all the current approaches to autonomous driving require the automotive manufacturers to shoulder the primary responsibility and liability associated with replacing human perception and decision making with automation, potentially slowing the penetration of autonomous vehicles, and consequently slowing the realization of the societal benefits of autonomous vehicles. We propose here a new approach to autonomous driving that will re-balance the responsibility and liabilities associated with autonomous driving between traditional automotive manufacturers, private infrastructure players, and third-party players. Our proposed distributed intelligence architecture leverages the significant advancements in connectivity and edge computing in the recent decades to partition the driving functions between the vehicle, edge computers on the road side, and specialized third-party computers that reside in the vehicle. Infrastructure becomes a critical enabler for autonomy. With this Infrastructure Enabled Autonomy (IEA) concept, the traditional automotive manufacturers will only need to shoulder responsibility and liability comparable to what they already do today, and the infrastructure and third-party players will share the added responsibility and liabilities associated with autonomous functionalities. We propose a Bayesian Network Model based framework for assessing the risk benefits of such a distributed intelligence architecture. An additional benefit of the proposed architecture is that it enables “autonomy as a service” while still allowing for private ownership of automobiles.",
"title": ""
},
{
"docid": "af752d0de962449acd9a22608bd7baba",
"text": "Ð R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.",
"title": ""
}
] | scidocsrr |
c13d3e96ac7a5df8c96bc0de66a33a1f | Fine-Grained Image Search | [
{
"docid": "9c47b068f7645dc5464328e80be24019",
"text": "In this paper we propose a highly effective and scalable framework for recognizing logos in images. At the core of our approach lays a method for encoding and indexing the relative spatial layout of local features detected in the logo images. Based on the analysis of the local features and the composition of basic spatial structures, such as edges and triangles, we can derive a quantized representation of the regions in the logos and minimize the false positive detections. Furthermore, we propose a cascaded index for scalable multi-class recognition of logos.\n For the evaluation of our system, we have constructed and released a logo recognition benchmark which consists of manually labeled logo images, complemented with non-logo images, all posted on Flickr. The dataset consists of a training, validation, and test set with 32 logo-classes. We thoroughly evaluate our system with this benchmark and show that our approach effectively recognizes different logo classes with high precision.",
"title": ""
}
] | [
{
"docid": "24a23aff0026141d1b6970e8216347f8",
"text": "Internet of Things (IoT) is a technology paradigm where millions of sensors monitor, and help inform or manage, physical, environmental and human systems in real-time. The inherent closed-loop responsiveness and decision making of IoT applications makes them ideal candidates for using low latency and scalable stream processing platforms. Distributed Stream Processing Systems (DSPS) are becoming essential components of any IoT stack, but the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT data streams and applications. Here, we develop a benchmark suite and performance metrics to evaluate DSPS for streaming IoT applications. The benchmark includes 13 common IoT tasks classified across various functional categories and forming micro-benchmarks, and two IoT applications for statistical summarization and predictive analytics that leverage various dataflow compositional features of DSPS. These are coupled with stream workloads sourced from real IoT observations from smart cities. We validate the IoT benchmark for the popular Apache Storm DSPS, and present empirical results.",
"title": ""
},
{
"docid": "f8742208fef05beb86d77f1d5b5d25ef",
"text": "The latest book on Genetic Programming, Poli, Langdon and McPhee’s (with contributions from John R. Koza) A Field Guide to Genetic Programming represents an exciting landmark with the authors choosing to make their work freely available by publishing using a form of the Creative Commons License[1]. In so doing they have created a must-read resource which is, to use their words, ’aimed at both newcomers and old-timers’. The book is freely available from the authors companion website [2] and Lulu.com [3] in both pdf and html form. For those who desire the more traditional page turning exercise, inexpensive printed copies can be ordered from Lulu.com. The Field Guides companion website also provides a link to the TinyGP code printed over eight pages of Appendix B, and a Discussion Group centered around the book. The book is divided into four parts with fourteen chapters and two appendices. Part I introduces the basics of Genetic Programming, Part II overviews more advanced topics, Part III highlights some of the real world applications and discusses issues facing the GP researcher or practitioner, while Part IV contains two appendices, the first introducing some key resources and the second appendix describes the TinyGP code. The pdf and html forms of the book have an especially useful feature, providing links to the articles available on-line at the time of publication, and to bibtex entries of the GP Bibliography. Following an overview of the book in chapter 1, chapter 2 introduces the basic concepts of GP focusing on the tree representation, initialisation, selection, and the search operators. Chapter 3 is centered around the preparatory steps in applying GP to a problem, which is followed by an outline of a sample run of GP on a simple instance of symbolic regression in Chapter 4. Overall these chapters provide a compact and useful introduction to GP. The first of the Advanced GP chapters in Part II looks at alternative strategies for initialisation and the search operators for tree-based GP. An overview of Modular, Grammatical and Developmental GP is provided in Chapter 6. While the chapter title",
"title": ""
},
{
"docid": "d258a14fc9e64ba612f2c8ea77f85d08",
"text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.",
"title": ""
},
{
"docid": "2ee0eb9ab9d6c5b9bdad02b9f95c8691",
"text": "Aim: To describe lower extremity injuries for badminton in New Zealand. Methods: Lower limb badminton injuries that resulted in claims accepted by the national insurance company Accident Compensation Corporation (ACC) in New Zealand between 2006 and 2011 were reviewed. Results: The estimated national injury incidence for badminton injuries in New Zealand from 2006 to 2011 was 0.66%. There were 1909 lower limb badminton injury claims which cost NZ$2,014,337 (NZ$ value over 2006 to 2011). The age-bands frequently injured were 10–19 (22%), 40–49 (22%), 30–39 (14%) and 50–59 (13%) years. Sixty five percent of lower limb injuries were knee ligament sprains/tears. Males sustained more cruciate ligament sprains than females (75 vs. 39). Movements involving turning, changing direction, shifting weight, pivoting or twisting were responsible for 34% of lower extremity injuries. Conclusion: The knee was most frequently OPEN ACCESS",
"title": ""
},
{
"docid": "0cb3cdb1e44fd9171156ad46fdf2d2ed",
"text": "In this paper, from the viewpoint of scene under standing, a three-layer Bayesian hierarchical framework (BHF) is proposed for robust vacant parking space detection. In practice, the challenges of vacant parking space inference come from dramatic luminance variations, shadow effect, perspective distortion, and the inter-occlusion among vehicles. By using a hidden labeling layer between an observation layer and a scene layer, the BHF provides a systematic generative structure to model these variations. In the proposed BHF, the problem of luminance variations is treated as a color classification problem and is tack led via a classification process from the observation layer to the labeling layer, while the occlusion pattern, perspective distortion, and shadow effect are well modeled by the relationships between the scene layer and the labeling layer. With the BHF scheme, the detection of vacant parking spaces and the labeling of scene status are regarded as a unified Bayesian optimization problem subject to a shadow generation model, an occlusion generation model, and an object classification model. The system accuracy was evaluated by using outdoor parking lot videos captured from morning to evening. Experimental results showed that the proposed framework can systematically determine the vacant space number, efficiently label ground and car regions, precisely locate the shadowed regions, and effectively tackle the problem of luminance variations.",
"title": ""
},
{
"docid": "8fe5ad58edf4a1c468fd0b6a303729ee",
"text": "Das CDISC Operational Data Model (ODM) ist ein populärer Standard in klinischen Datenmanagementsystemen (CDMS). Er beschreibt sowohl die Struktur einer klinischen Prüfung inklusive der Visiten, Formulare, Datenele mente und Codelisten als auch administrative Informationen wie gültige Nutzeracco unts. Ferner enthält er alle erhobenen klinischen Fakten über die Pro banden. Sein originärer Einsatzzweck liegt in der Archivierung von Studiendatenbanken und dem Austausch klinischer Daten zwischen verschiedenen CDMS. Aufgrund de r reichhaltigen Struktur eignet er sich aber auch für weiterführende Anwendungsfälle. Im Rahmen studentischer Praktika wurden verschied ene Szenarien für funktionale Ergänzungen des freien CDMS OpenClinica unters ucht und implementiert, darunter die Generierung eines Annotated CRF, der Import vo n Studiendaten per Web-Service, das semiautomatisierte Anlegen von Studien so wie der Export von Studiendaten in einen relationalen Data Mart und in ein Forschungs-Data-Warehouse auf Basis von i2b2.",
"title": ""
},
{
"docid": "81b2a039a391b5f2c1a9a15c94f1f67e",
"text": "Evolution of resistance in pests can reduce the effectiveness of insecticidal proteins from Bacillus thuringiensis (Bt) produced by transgenic crops. We analyzed results of 77 studies from five continents reporting field monitoring data for resistance to Bt crops, empirical evaluation of factors affecting resistance or both. Although most pest populations remained susceptible, reduced efficacy of Bt crops caused by field-evolved resistance has been reported now for some populations of 5 of 13 major pest species examined, compared with resistant populations of only one pest species in 2005. Field outcomes support theoretical predictions that factors delaying resistance include recessive inheritance of resistance, low initial frequency of resistance alleles, abundant refuges of non-Bt host plants and two-toxin Bt crops deployed separately from one-toxin Bt crops. The results imply that proactive evaluation of the inheritance and initial frequency of resistance are useful for predicting the risk of resistance and improving strategies to sustain the effectiveness of Bt crops.",
"title": ""
},
{
"docid": "596bb1265a375c68f0498df90f57338e",
"text": "The concept of unintended pregnancy has been essential to demographers in seeking to understand fertility, to public health practitioners in preventing unwanted childbear-ing and to both groups in promoting a woman's ability to determine whether and when to have children. Accurate measurement of pregnancy intentions is important in understanding fertility-related behaviors, forecasting fertility, estimating unmet need for contraception, understanding the impact of pregnancy intentions on maternal and child health, designing family planning programs and evaluating their effectiveness, and creating and evaluating community-based programs that prevent unintended pregnancy. 1 Pregnancy unintendedness is a complex concept, and has been the subject of recent conceptual and method-ological critiques. 2 Pregnancy intentions are increasingly viewed as encompassing affective, cognitive, cultural and contextual dimensions. Developing a more complete understanding of pregnancy intentions should advance efforts to increase contraceptive use, to prevent unintended pregnancies and to improve the health of women and their children. To provide a scientific foundation for public health efforts to prevent unintended pregnancy, we conducted a review of unintended pregnancy between the fall of 1999 and the spring of 2001 as part of strategic planning activities within the Division of Reproductive Health at the Centers for Disease Control and Prevention (CDC). We reviewed the published and unpublished literature, consulted with experts in reproductive health and held several joint meetings with the Demographic and Behavioral Research Branch of the National Institute of Child Health and Human Development , and the Office of Population Affairs of the Department of Health and Human Services. We used standard scientific search engines, such as Medline, to find relevant articles published since 1975, and identified older references from bibliographies contained in recent articles; academic experts and federal officials helped to identify unpublished reports. This comment summarizes our findings and incorporates insights gained from the joint meetings and the strategic planning process. CURRENT DEFINITIONS AND MEASURES Conventional measures of unintended pregnancy are designed to reflect a woman's intentions before she became pregnant. 3 Unintended pregnancies are pregnancies that are reported to have been either unwanted (i.e., they occurred when no children, or no more children, were desired) or mistimed (i.e., they occurred earlier than desired). In contrast, pregnancies are described as intended if they are reported to have happened at the \" right time \" 4 or later than desired (because of infertility or difficulties in conceiving). A concept related to unintended pregnancy is unplanned pregnancy—one that occurred when …",
"title": ""
},
{
"docid": "0e53caa9c6464038015a6e83b8953d92",
"text": "Many interactive rendering algorithms require operations on multiple fragments (i.e., ray intersections) at the same pixel location: however, current Graphics Processing Units (GPUs) capture only a single fragment per pixel. Example effects include transparency, translucency, constructive solid geometry, depth-of-field, direct volume rendering, and isosurface visualization. With current GPUs, programmers implement these effects using multiple passes over the scene geometry, often substantially limiting performance. This paper introduces a generalization of the Z-buffer, called the k-buffer, that makes it possible to efficiently implement such algorithms with only a single geometry pass, yet requires only a small, fixed amount of additional memory. The k-buffer uses framebuffer memory as a read-modify-write (RMW) pool of k entries whose use is programmatically defined by a small k-buffer program. We present two proposals for adding k-buffer support to future GPUs and demonstrate numerous multiple-fragment, single-pass graphics algorithms running on both a software-simulated k-buffer and a k-buffer implemented with current GPUs. The goal of this work is to demonstrate the large number of graphics algorithms that the k-buffer enables and that the efficiency is superior to current multipass approaches.",
"title": ""
},
{
"docid": "b64a2e6bb533043a48b7840b72f71331",
"text": "Autonomous long range navigation in partially known planetary-like terrain is an open challenge for robotics. Navigating several hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we brie y describe some functionalities that are currently running on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to integrate various instances of the perception and decision functionalities, and on the di culties raised by this integration.",
"title": ""
},
{
"docid": "b990e62cb73c0f6c9dd9d945f72bb047",
"text": "Admissible heuristics are an important class of heuristics worth discovering: they guarantee shortest path solutions in search algorithms such asA* and they guarantee less expensively produced, but boundedly longer solutions in search algorithms such as dynamic weighting. Unfortunately, effective (accurate and cheap to compute) admissible heuristics can take years for people to discover. Several researchers have suggested that certain transformations of a problem can be used to generate admissible heuristics. This article defines a more general class of transformations, calledabstractions, that are guaranteed to generate only admissible heuristics. It also describes and evaluates an implemented program (Absolver II) that uses a means-ends analysis search control strategy to discover abstracted problems that result in effective admissible heuristics. Absolver II discovered several well-known and a few novel admissible heuristics, including the first known effective one for Rubik's Cube, thus concretely demonstrating that effective admissible heuristics can be tractably discovered by a machine.",
"title": ""
},
{
"docid": "bee4b2dfab47848e8429d4b4617ec9e5",
"text": "Benefit from the quick development of deep learning techniques, salient object detection has achieved remarkable progresses recently. However, there still exists following two major challenges that hinder its application in embedded devices, low resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keep accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the current predicted salient regions from side-output features, the network can eventually explore the missing object parts and details which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, and with advantages in terms of simplicity, efficiency (45 FPS) and model size (81 MB).",
"title": ""
},
{
"docid": "e840e1e77a8a5c2c187c79eda9143ade",
"text": "The aim of this study is to find out the customer’s satisfaction with Yemeni Mobile service providers. Th is study examined the relationship between perceived quality, perceived value, customer expectation, and corporate image with customer satisfaction. The result of this study is based on data gathered online from 118 academic staff in public universit ies in Yemen. The study found that the relationship between perceived value, perceived quality and corporate image have a significant positive influence on customer satisfaction, whereas customer expectation has positive but without statistical significance.",
"title": ""
},
{
"docid": "41fe7d2febb05a48daf69b4a41c77251",
"text": "Multi-objective evolutionary algorithms for the construction of neural ensembles is a relatively new area of research. We recently proposed an ensemble learning algorithm called DIVACE (DIVerse and ACcurate Ensemble learning algorithm). It was shown that DIVACE tries to find an optimal trade-off between diversity and accuracy as it searches for an ensemble for some particular pattern recognition task by treating these two objectives explicitly separately. A detailed discussion of DIVACE together with further experimental studies form the essence of this paper. A new diversity measure which we call Pairwise Failure Crediting (PFC) is proposed. This measure forms one of the two evolutionary pressures being exerted explicitly in DIVACE. Experiments with this diversity measure as well as comparisons with previously studied approaches are hence considered. Detailed analysis of the results show that DIVACE, as a concept, has promise. Mathematical Subject Classification (2000): 68T05, 68Q32, 68Q10.",
"title": ""
},
{
"docid": "3172304147c13068b6cec8fd252cda5e",
"text": "Widespread growth of open wireless hotspots has made it easy to carry out man-in-the-middle attacks and impersonate web sites. Although HTTPS can be used to prevent such attacks, its universal adoption is hindered by its performance cost and its inability to leverage caching at intermediate servers (such as CDN servers and caching proxies) while maintaining end-to-end security. To complement HTTPS, we revive an old idea from SHTTP, a protocol that offers end-to-end web integrity without confidentiality. We name the protocol HTTPi and give it an efficient design that is easy to deploy for today’s web. In particular, we tackle several previously-unidentified challenges, such as supporting progressive page loading on the client’s browser, handling mixed content, and defining access control policies among HTTP, HTTPi, and HTTPS content from the same domain. Our prototyping and evaluation experience show that HTTPi incurs negligible performance overhead over HTTP, can leverage existing web infrastructure such as CDNs or caching proxies without any modifications to them, and can make many of the mixed-content problems in existing HTTPS web sites easily go away. Based on this experience, we advocate browser and web server vendors to adopt HTTPi.",
"title": ""
},
{
"docid": "d7ebfe6e0f0fa07c5e22d24c69aca13e",
"text": "Malware programs that incorporate trigger-based behavior initiate malicious activities based on conditions satisfied only by specific inputs. State-of-the-art malware analyzers discover code guarded by triggers via multiple path exploration, symbolic execution, or forced conditional execution, all without knowing the trigger inputs. We present a malware obfuscation technique that automatically conceals specific trigger-based behavior from these malware analyzers. Our technique automatically transforms a program by encrypting code that is conditionally dependent on an input value with a key derived from the input and then removing the key from the program. We have implemented a compiler-level tool that takes a malware source program and automatically generates an obfuscated binary. Experiments on various existing malware samples show that our tool can hide a significant portion of trigger based code. We provide insight into the strengths, weaknesses, and possible ways to strengthen current analysis approaches in order to defeat this malware obfuscation technique.",
"title": ""
},
{
"docid": "1e306a31f5a9becadc267a895be40335",
"text": "Knowledge has been lately recognized as one of the most important assets of organizations. Can information technology help the growth and the sustainment of organizational knowledge? The answer is yes, if care is taken to remember that IT here is just a part of the story (corporate culture and work practices being equally relevant) and that the information technologies best suited for this purpose should be expressly designed with knowledge management in view. This special issue of the Journal of Universal Computer Science contains a selection f papers from the First Conference on Practical Applications of Knowledge Management. Each paper describes a specific type of information technology suitable for the support of different aspects of knowledge management.",
"title": ""
},
{
"docid": "3160dea1a6ebd67d57c0d304e17f4882",
"text": "A Concept Inventory (CI) is a set of multiple choice questions used to reveal student's misconceptions related to some topic. Each available choice (besides the correct choice) is a distractor that is carefully developed to address a specific misunderstanding, a student wrong thought. In computer science introductory programming courses, the development of CIs is still beginning, with many topics requiring further study and analysis. We identify, through analysis of open-ended exams and instructor interviews, introductory programming course misconceptions related to function parameter use and scope, variables, recursion, iteration, structures, pointers and boolean expressions. We categorize these misconceptions and define high-quality distractors founded in words used by students in their responses to exam questions. We discuss the difficulty of assessing introductory programming misconceptions independent of the syntax of a language and we present a detailed discussion of two pilot CIs related to parameters: an open-ended question (to help identify new misunderstandings) and a multiple choice question with suggested distractors that we identified.",
"title": ""
},
{
"docid": "8d890dba24fc248ee37653aad471713f",
"text": "We consider the problem of constructing a spanning tree for a graph G = (V,E) with n vertices whose maximal degree is the smallest among all spanning trees of G. This problem is easily shown to be NP-hard. We describe an iterative polynomial time approximation algorithm for this problem. This algorithm computes a spanning tree whose maximal degree is at most O(Δ + log n), where Δ is the degree of some optimal tree. The result is generalized to the case where only some vertices need to be connected (Steiner case) and to the case of directed graphs. It is then shown that our algorithm can be refined to produce a spanning tree of degree at most Δ + 1. Unless P = NP, this is the best bound achievable in polynomial time.",
"title": ""
},
{
"docid": "87eb69d6404bf42612806a5e6d67e7bb",
"text": "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.",
"title": ""
}
] | scidocsrr |
bb02d18b5e6d6ed00e90bfd82d79ce56 | Deep Video Color Propagation | [
{
"docid": "0d2e9d514586f083007f5e93d8bb9844",
"text": "Recovering Matches: Analysis-by-Synthesis Results Starting point: Unsupervised learning of image matching Applications: Feature matching, structure from motion, dense optical flow, recognition, motion segmentation, image alignment Problem: Difficult to do directly (e.g. based on video) Insights: Image matching is a sub-problem of frame interpolation Frame interpolation can be learned from natural video sequences",
"title": ""
},
{
"docid": "b401c0a7209d98aea517cf0e28101689",
"text": "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"title": ""
}
] | [
{
"docid": "4ea81c5e995d074998ba34a820c3de1c",
"text": "We address the delicate problem of offsetting polygonal meshes. Offsetting is important for stereolithography, NC machining, rounding corners, collision avoidance, and Hausdorff error calculation. We introduce a new fast, and very simple method for offsetting (growing and shrinking) a solid model by arbitrary distance r. Our approach is based on a hybrid data structure combining point samples, voxels, and continuous surfaces. Each face, edge, and vertex of the original solid generate a set of offset points spaced along the (pencil of) normals associated with it. The offset points and normals are sufficiently dense to ensure that all voxels between the original and the offset surfaces are properly labeled as either too close to the original solid or possibly containing the offset surface. Then the offset boundary is generated as the isosurface using these voxels and the associated offset points. We provide a tight error bound on the resulting surface and report experimental results on a variety of CAD models.",
"title": ""
},
{
"docid": "6b9d5cbdf91d792d60621da0bb45a303",
"text": "AR systems pose potential security concerns that should be addressed before the systems become widespread.",
"title": ""
},
{
"docid": "2a76205b80c90ff9a4ca3ccb0434bb03",
"text": "Finding out which e-shops offer a specific product is a central challenge for building integrated product catalogs and comparison shopping portals. Determining whether two offers refer to the same product involves extracting a set of features (product attributes) from the web pages containing the offers and comparing these features using a matching function. The existing gold standards for product matching have two shortcomings: (i) they only contain offers from a small number of e-shops and thus do not properly cover the heterogeneity that is found on the Web. (ii) they only provide a small number of generic product attributes and therefore cannot be used to evaluate whether detailed product attributes have been correctly extracted from textual product descriptions. To overcome these shortcomings, we have created two public gold standards: The WDC Product Feature Extraction Gold Standard consists of over 500 product web pages originating from 32 different websites on which we have annotated all product attributes (338 distinct attributes) which appear in product titles, product descriptions, as well as tables and lists. The WDC Product Matching Gold Standard consists of over 75 000 correspondences between 150 products (mobile phones, TVs, and headphones) in a central catalog and offers for these products on the 32 web sites. To verify that the gold standards are challenging enough, we ran several baseline feature extraction and matching methods, resulting in F-score values in the range 0.39 to 0.67. In addition to the gold standards, we also provide a corpus consisting of 13 million product pages from the same websites which might be useful as background knowledge for training feature extraction and matching methods.",
"title": ""
},
{
"docid": "6eed03674521ecf9a558ab0059fc167f",
"text": "University professors traditionally struggle to incorporate software testing into their course curriculum. Worries include double-grading for correctness of both source and test code and finding time to teach testing as a topic. Test-driven development (TDD) has been suggested as a possible solution to improve student software testing skills and to realize the benefits of testing. According to most existing studies, TDD improves software quality and student productivity. This paper surveys the current state of TDD experiments conducted exclusively at universities. Similar surveys compare experiments in both the classroom and industry, but none have focused strictly on academia.",
"title": ""
},
{
"docid": "1eb30a6cf31e5c256b9d1ca091e532cc",
"text": "The aim of this study was to evaluate the range of techniques used by radiologists performing shoulder, hip, and knee arthrography using fluoroscopic guidance. Questionnaires on shoulder, hip, and knee arthrography were distributed to radiologists at a national radiology meeting. We enquired regarding years of experience, preferred approaches, needle gauge, gadolinium dilution, and volume injected. For each approach, the radiologist was asked their starting and end needle position based on a numbered and lettered grid superimposed on a radiograph. Sixty-eight questionnaires were returned. Sixty-eight radiologists performed shoulder and hip arthrography, and 65 performed knee arthrograms. Mean experience was 13.5 and 12.8 years, respectively. For magnetic resonance arthrography, a gadolinium dilution of 1/200 was used by 69–71%. For shoulder arthrography, an anterior approach was preferred by 65/68 (96%). The most common site of needle end position, for anterior and posterior approaches, was immediately lateral to the humeral cortex. A 22-gauge needle was used by 46/66 (70%). Mean injected volume was 12.7 ml (5–30). For hip arthrography, an anterior approach was preferred by 51/68 (75%). The most common site of needle end position, for anterior and lateral approaches, was along the lateral femoral head/neck junction. A 22-gauge needle was used by 53/68 (78%). Mean injected volume was 11.5 ml (5–20). For knee arthrography, a lateral approach was preferred by 41/64 (64%). The most common site of needle end position, for lateral and medial approaches, was mid-patellofemoral joint level. A 22-gauge needle was used by 36/65 (56%). Mean injected volume was 28.2 ml (5–60). Arthrographic approaches for the shoulder, hip, and knee vary among radiologists over a wide range of experience levels.",
"title": ""
},
{
"docid": "a1118a6310736fc36dbc70bd25bd5f28",
"text": "Many studies have documented large and persistent productivity differences across producers, even within narrowly defined industries. This paper both extends and departs from the past literature, which focused on technological explanations for these differences, by proposing that demand-side features also play a role in creating the observed productivity variation. The specific mechanism investigated here is the effect of spatial substitutability in the product market. When producers are densely clustered in a market, it is easier for consumers to switch between suppliers (making the market in a certain sense more competitive). Relatively inefficient producers find it more difficult to operate profitably as a result. Substitutability increases truncate the productivity distribution from below, resulting in higher minimum and average productivity levels as well as less productivity dispersion. The paper presents a model that makes this process explicit and empirically tests it using data from U.S. ready-mixed concrete plants, taking advantage of geographic variation in substitutability created by the industry’s high transport costs. The results support the model’s predictions and appear robust. Markets with high demand density for ready-mixed concrete—and thus high concrete plant densities—have higher lower-bound and average productivity levels and exhibit less productivity dispersion among their producers.",
"title": ""
},
{
"docid": "8b156fb8ced52d0135e8a80361d93757",
"text": "Memcached is one of the world's largest key-value deployments. This article analyzes the Memcached workload at Facebook, looking at server-side performance, request composition, caching efficacy, and key locality. The observations presented here lead to several design insights and new research directions for key-value caches, such as the relative inadequacy of the least recently used (LRU) replacement policy.",
"title": ""
},
{
"docid": "f83f9bb497ffdb8e09211e6058bd4d87",
"text": "For monitoring the conditions of railway infrastructures, axle box acceleration (ABA) measurements on board of trains is used. In this paper, the focus is on the early detection of short surface defects called squats. Different classes of squats are classified based on the response in the frequency domain of the ABA signal, using the wavelet power spectrum. For the investigated Dutch tracks, the power spectrum in the frequencies between 1060-1160Hz and around 300Hz indicate existence of a squat and also provide information of whether a squat is light, moderate or severe. The detection procedure is then validated relying on real-life measurements of ABA signals from measuring trains, and data of severity and location of squats obtained via a visual inspection of the tracks. Based on the real-life tests in the Netherlands, the hit rate of the system for light squats is higher than 78%, with a false alarm rate of 15%. In the case of severe squats the hit rate was 100% and zero false alarms.",
"title": ""
},
{
"docid": "f8ac5a0dbd0bf8228b8304c1576189b9",
"text": "The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories - the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approach to estimate their SWM costs. The models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.",
"title": ""
},
{
"docid": "2ca40fc7cf2cb7377b9b89be2606b096",
"text": "By “elementary” plane geometry I mean the geometry of lines and circles—straightedge and compass constructions—in both Euclidean and non-Euclidean planes. An axiomatic description of it is in Sections 1.1, 1.2, and 1.6. This survey highlights some foundational history and some interesting recent discoveries that deserve to be better known, such as the hierarchies of axiom systems, Aristotle’s axiom as a “missing link,” Bolyai’s discovery—proved and generalized by William Jagy—of the relationship of “circle-squaring” in a hyperbolic plane to Fermat primes, the undecidability, incompleteness, and consistency of elementary Euclidean geometry, and much more. A main theme is what Hilbert called “the purity of methods of proof,” exemplified in his and his early twentieth century successors’ works on foundations of geometry.",
"title": ""
},
{
"docid": "b3fce50260d7f77e8ca294db9c6666f6",
"text": "Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, boosting the range of applications of nanotechnology in the biomédical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. Recently, the advancements in graphene-based electronics have opened the door to electromagnetic communications in the nano-scale. In this paper, a new quantum mechanical framework is used to analyze the properties of Carbon Nanotubes (CNTs) as nano-dipole antennas. For this, first the transmission line properties of CNTs are obtained using the tight-binding model as functions of the CNT length, diameter, and edge geometry. Then, relevant antenna parameters such as the fundamental resonant frequency and the input impedance are calculated and compared to those of a nano-patch antenna based on a Graphene Nanoribbon (GNR) with similar dimensions. The results show that for a maximum antenna size in the order of several hundred nanometers (the expected maximum size for a nano-device), both a nano-dipole and a nano-patch antenna will be able to radiate electromagnetic waves in the terahertz band (0.1–10.0 THz).",
"title": ""
},
{
"docid": "8cbc15b5e5c957f464573e52f00f2924",
"text": "Tennis is one of the most popular sports in the world. Many researchers have studied in tennis model to find out whose player will be the winner of the match by using the statistical data. This paper proposes a powerful technique to predict the winner of the tennis match. The proposed method provides more accurate prediction results by using the statistical data and environmental data based on Multi-Layer Perceptron (MLP) with back-propagation learning algorithm.",
"title": ""
},
{
"docid": "7620ed24b84b741be8800b1b52f54807",
"text": "JASVINDER A. SINGH, KENNETH G. SAAG, S. LOUIS BRIDGES JR., ELIE A. AKL, RAVEENDHARA R. BANNURU, MATTHEW C. SULLIVAN, ELIZAVETA VAYSBROT, CHRISTINE MCNAUGHTON, MIKALA OSANI, ROBERT H. SHMERLING, JEFFREY R. CURTIS, DANIEL E. FURST, DEBORAH PARKS, ARTHUR KAVANAUGH, JAMES O’DELL, CHARLES KING, AMYE LEONG, ERIC L. MATTESON, JOHN T. SCHOUSBOE, BARBARA DREVLOW, SETH GINSBERG, JAMES GROBER, E. WILLIAM ST.CLAIR, ELIZABETH TINDALL, AMY S. MILLER, AND TIMOTHY MCALINDON",
"title": ""
},
{
"docid": "4ddb0d4bf09dc9244ee51d4b843db5f2",
"text": "BACKGROUND\nMobile applications (apps) have potential for helping people increase their physical activity, but little is known about the behavior change techniques marketed in these apps.\n\n\nPURPOSE\nThe aim of this study was to characterize the behavior change techniques represented in online descriptions of top-ranked apps for physical activity.\n\n\nMETHODS\nTop-ranked apps (n=167) were identified on August 28, 2013, and coded using the Coventry, Aberdeen and London-Revised (CALO-RE) taxonomy of behavior change techniques during the following month. Analyses were conducted during 2013.\n\n\nRESULTS\nMost descriptions of apps incorporated fewer than four behavior change techniques. The most common techniques involved providing instruction on how to perform exercises, modeling how to perform exercises, providing feedback on performance, goal-setting for physical activity, and planning social support/change. A latent class analysis revealed the existence of two types of apps, educational and motivational, based on their configurations of behavior change techniques.\n\n\nCONCLUSIONS\nBehavior change techniques are not widely marketed in contemporary physical activity apps. Based on the available descriptions and functions of the observed techniques in contemporary health behavior theories, people may need multiple apps to initiate and maintain behavior change. This audit provides a starting point for scientists, developers, clinicians, and consumers to evaluate and enhance apps in this market.",
"title": ""
},
{
"docid": "8ca3fe42e8a59262f319b995309cbd60",
"text": "Deep neural networks (DNNs) are used by different applications that are executed on a range of computer architectures, from IoT devices to supercomputers. The footprint of these networks is huge as well as their computational and communication needs. In order to ease the pressure on resources, research indicates that in many cases a low precision representation (1-2 bit per parameter) of weights and other parameters can achieve similar accuracy while requiring less resources. Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well fitted to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representation of numbers. This paper presents a new streaming architecture for running QNNs on FPGAs. The proposed architecture scales out better than alternatives, allowing us to take advantage of systems with multiple FPGAs. We also included support for skip connections, that are used in state-of-the art NNs, and shown that our architecture allows to add those connections almost for free. All this allowed us to implement an 18-layer ResNet for 224×224 images classification, achieving 57.5% top-1 accuracy. In addition, we implemented a full-sized quantized AlexNet. In contrast to previous works, we use 2-bit activations instead of 1-bit ones, which improves AlexNet's top-1 accuracy from 41.8% to 51.03% for the ImageNet classification. Both AlexNet and ResNet can handle 1000-class real-time classification on an FPGA. Our implementation of ResNet-18 consumes 5× less power and is 4× slower for ImageNet, when compared to the same NN on the latest Nvidia GPUs. Smaller NNs, that fit a single FPGA, are running faster then on GPUs on small (32×32) inputs, while consuming up to 20× less energy and power.",
"title": ""
},
{
"docid": "43975c43de57d889b038cdee8b35e786",
"text": "We present an algorithm for computing rigorous solutions to a large class of ordinary differential equations. The main algorithm is based on a partitioning process and the use of interval arithmetic with directed rounding. As an application, we prove that the Lorenz equations support a strange attractor, as conjectured by Edward Lorenz in 1963. This conjecture was recently listed by Steven Smale as one of several challenging problems for the twenty-first century. We also prove that the attractor is robust, i.e., it persists under small perturbations of the coefficients in the underlying differential equations. Furthermore, the flow of the equations admits a unique SRB measure, whose support coincides with the attractor. The proof is based on a combination of normal form theory and rigorous computations.",
"title": ""
},
{
"docid": "b44600830a6aacd0a1b7ec199cba5859",
"text": "Existing e-service quality scales mainly focus on goal-oriented e-shopping behavior excluding hedonic quality aspects. As a consequence, these scales do not fully cover all aspects of consumer's quality evaluation. In order to integrate both utilitarian and hedonic e-service quality elements, we apply a transaction process model to electronic service encounters. Based on this general framework capturing all stages of the electronic service delivery process, we develop a transaction process-based scale for measuring service quality (eTransQual). After conducting exploratory and confirmatory factor analysis, we identify five discriminant quality dimensions: functionality/design, enjoyment, process, reliability and responsiveness. All extracted dimensions of eTransQual show a significant positive impact on important outcome variables like perceived value and customer satisfaction. Moreover, enjoyment is a dominant factor in influencing both relationship duration and repurchase intention as major drivers of customer lifetime value. As a result, we present conceptual and empirical evidence for the need to integrate both utilitarian and hedonic e-service quality elements into one measurement scale. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a7765d68c277dbc712376a46a377d5d4",
"text": "The trend of currency rates can be predicted with supporting from supervised machine learning in the transaction systems such as support vector machine. Not only representing models in use of machine learning techniques in learning, the support vector machine (SVM) model also is implemented with actual FoRex transactions. This might help automatically to make the transaction decisions of Bid/Ask in Foreign Exchange Market by using Expert Advisor (Robotics). The experimental results show the advantages of use SVM compared to the transactions without use SVM ones.",
"title": ""
},
{
"docid": "1ab0308539bc6508b924316b39a963ca",
"text": "Daily wafer fabrication in semiconductor foundry depends on considerable metrology operations for tool-quality and process-quality assurance. The metrology operations required a lot of metrology tools, which increase FAB's investment. Also, these metrology operations will increase cycle time of wafer process. Metrology operations do not bring any value added to wafer but only quality assurance. This article provides a new method denoted virtual metrology (VM) to utilize sensor data collected from 300 mm FAB's tools to forecast quality data of wafers and tools. This proposed method designs key steps to establish a VM control model based on neural networks and to develop and deploy applications following SEMI EDA (equipment data acquisition) standards.",
"title": ""
},
{
"docid": "b03b34dc9708693f06ee4786c48ce9b5",
"text": "Mobile Cloud Computing (MCC) enables smartphones to offload compute-intensive codes and data to clouds or cloudlets for energy conservation. Thus, MCC liberates smartphones from battery shortage and embraces more versatile mobile applications. Most pioneering MCC research work requires a consistent network performance for offloading. However, such consistency is challenged by frequent mobile user movements and unstable network quality, thereby resulting in a suboptimal offloading decision. To embrace network inconsistency, we propose ENDA, a three-tier architecture that leverages user track prediction, realtime network performance and server loads to optimize offloading decisions. On cloud tier, we first design a greedy searching algorithm to predict user track using historical user traces stored in database servers. We then design a cloud-enabled Wi-Fi access point (AP) selection scheme to find the most energy efficient AP for smartphone offloading. We evaluate the performance of ENDA through simulations under a real-world scenario. The results demonstrate that ENDA can generate offloading decisions with optimized energy efficiency, desirable response time, and potential adaptability to a variety of scenarios. ENDA outperforms existing offloading techniques that do not consider user mobility and server workload balance management.",
"title": ""
}
] | scidocsrr |
3743db53598a7771508150db2f4a34a1 | Towards a Robust Solution to People Counting | [
{
"docid": "0a75a45141a7f870bba32bed890da782",
"text": "Surveillance systems for public security are going beyond the conventional CCTV. A new generation of systems relies on image processing and computer vision techniques, deliver more ready-to-use information, and provide assistance for early detection of unusual events. Crowd density is a useful source of information because unusual crowdedness is often related to unusual events. Previous works on crowd density estimation either ignore perspective distortion or perform the correction based on incorrect formulation. Also there is no investigation on whether the geometric correction derived for the ground plane can be applied to human objects standing upright to the plane. This paper derives the relation for geometric correction for the ground plane and proves formally that it can be directly applied to all the foreground pixels. We also propose a very efficient implementation because it is important for a real-time application. Finally a time-adaptive criterion for unusual crowdedness detection is described.",
"title": ""
},
{
"docid": "af752d0de962449acd9a22608bd7baba",
"text": "Ð R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.",
"title": ""
}
] | [
{
"docid": "34bd9a54a1aeaf82f7c4b27047cb2f49",
"text": "Choosing a good location when opening a new store is crucial for the future success of a business. Traditional methods include offline manual survey, which is very time consuming, and analytic models based on census data, which are unable to adapt to the dynamic market. The rapid increase of the availability of big data from various types of mobile devices, such as online query data and offline positioning data, provides us with the possibility to develop automatic and accurate data-driven prediction models for business store placement. In this paper, we propose a Demand Distribution Driven Store Placement (D3SP) framework for business store placement by mining search query data from Baidu Maps. D3SP first detects the spatial-temporal distributions of customer demands on different business services via query data from Baidu Maps, the largest online map search engine in China, and detects the gaps between demand and supply. Then we determine candidate locations via clustering such gaps. In the final stage, we solve the location optimization problem by predicting and ranking the number of customers. We not only deploy supervised regression models to predict the number of customers, but also learn to rank models to directly rank the locations. We evaluate our framework on various types of businesses in real-world cases, and the experiments results demonstrate the effectiveness of our methods. D3SP as the core function for store placement has already been implemented as a core component of our business analytics platform and could be potentially used by chain store merchants on Baidu Nuomi.",
"title": ""
},
{
"docid": "ea50fcb63d7eeb37a3acd47ce4a7a572",
"text": "Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, automated detection approach is highly demanded in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework by leveraging the 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with the previous methods employing hand-crafted features or 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.",
"title": ""
},
{
"docid": "4ad535f3b4f1afba4497a4026236424e",
"text": "We study the problem of noninvasively estimating Blood Pressure (BP) without using a cuff, which is attractive for continuous monitoring of BP over Body Area Networks. It has been shown that the Pulse Arrival Time (PAT) measured as the delay between the ECG peak and a point in the finger PPG waveform can be used to estimate systolic and diastolic BP. Our aim is to evaluate the performance of such a method using the available MIMIC database, while at the same time improve the performance of existing techniques. We propose an algorithm to estimate BP from a combination of PAT and heart rate, showing improvement over PAT alone. We also show how the method achieves recalibration using an RLS adaptive algorithm. Finally, we address the use case of ECG and PPG sensors wirelessly communicating to an aggregator and study the effect of skew and jitter on BP estimation.",
"title": ""
},
{
"docid": "766b726231f9d9540deb40183b49a655",
"text": "This paper presents a survey of georeferenced point clouds. Concentration is, on the one hand, put on features, which originate in the measurement process themselves, and features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed. This includes the data structures, but also spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features.",
"title": ""
},
{
"docid": "4be9ae4bc6fb01e78d550bedf199d0b0",
"text": "Protein timing is a popular dietary strategy designed to optimize the adaptive response to exercise. The strategy involves consuming protein in and around a training session in an effort to facilitate muscular repair and remodeling, and thereby enhance post-exercise strength- and hypertrophy-related adaptations. Despite the apparent biological plausibility of the strategy, however, the effectiveness of protein timing in chronic training studies has been decidedly mixed. The purpose of this paper therefore was to conduct a multi-level meta-regression of randomized controlled trials to determine whether protein timing is a viable strategy for enhancing post-exercise muscular adaptations. The strength analysis comprised 478 subjects and 96 ESs, nested within 41 treatment or control groups and 20 studies. The hypertrophy analysis comprised 525 subjects and 132 ESs, nested with 47 treatment or control groups and 23 studies. A simple pooled analysis of protein timing without controlling for covariates showed a small to moderate effect on muscle hypertrophy with no significant effect found on muscle strength. In the full meta-regression model controlling for all covariates, however, no significant differences were found between treatment and control for strength or hypertrophy. The reduced model was not significantly different from the full model for either strength or hypertrophy. With respect to hypertrophy, total protein intake was the strongest predictor of ES magnitude. These results refute the commonly held belief that the timing of protein intake in and around a training session is critical to muscular adaptations and indicate that consuming adequate protein in combination with resistance exercise is the key factor for maximizing muscle protein accretion.",
"title": ""
},
{
"docid": "7d82c8d8fae92b9ac2a3d63f74e0b973",
"text": "The security of sensitive data and the safety of control signal are two core issues in industrial control system (ICS). However, the prevalence of USB storage devices brings a great challenge on protecting ICS in those respects. Unfortunately, there is currently no solution especially for ICS to provide a complete defense against data transmission between untrusted USB storage devices and critical equipment without forbidding normal USB device function. This paper proposes a trust management scheme of USB storage devices for ICS (TMSUI). By fully considering the background of application scenarios, TMSUI is designed based on security chip to achieve authoring a certain USB storage device to only access some exact protected terminals in ICS for a particular period of time. The issues about digital forensics and revocation of authorization are discussed. The prototype system is finally implemented and the evaluation on it indicates that TMSUI effectively meets the security goals with high compatibility and good performance.",
"title": ""
},
{
"docid": "c063474634eb427cf0215b4500182f8c",
"text": "Factorization Machines offer good performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory adaptive constraints and frequency adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency in computational advertising datasets with billions examples and features.",
"title": ""
},
{
"docid": "cfb06477edaa39f53b1b892cdfc1621a",
"text": "This paper presents ray casting as the methodological basis for a CAD/CAM solid modeling system. Solid objects are modeled by combining primitive solids, such as blocks and cylinders, using the set operators union, intersection, and difference. To visualize and analyze the composite solids modeled, virtual light rays are cast as probes. By virtue of its simplicity, ray casting is reliable and extensible. The most difficult mathematical problem is finding linesurface intersection points. So surfaces such as planes, quad&, tori, and probably even parametric surface patches may bound the primitive solids. The adequacy and efficiency of ray casting are issues addressed here. A fast picture generation capability for interactive modeling is the biggest challenge. New methods are presented, accompanied by sample pictures and CPU times, to meet the challenge.",
"title": ""
},
{
"docid": "3d332b3ae4487a7272ae1e2204965f98",
"text": "Robots are increasingly present in modern industry and also in everyday life. Their applications range from health-related situations, for assistance to elderly people or in surgical operations, to automatic and driver-less vehicles (on wheels or flying) or for driving assistance. Recently, an interest towards robotics applied in agriculture and gardening has arisen, with applications to automatic seeding and cropping or to plant disease control, etc. Autonomous lawn mowers are succesful market applications of gardening robotics. In this paper, we present a novel robot that is developed within the TrimBot2020 project, funded by the EU H2020 program. The project aims at prototyping the first outdoor robot for automatic bush trimming and rose pruning.",
"title": ""
},
{
"docid": "eafe4aa1aada03bad956d8bed16546dd",
"text": "The increasing prevalence of male-to-female (MtF) transsexualism in Western countries is largely due to the growing number of MtF transsexuals who have a history of sexual arousal with cross-dressing or cross-gender fantasy. Ray Blanchard proposed that these transsexuals have a paraphilia he called autogynephilia, which is the propensity to be sexually aroused by the thought or image of oneself as female. Autogynephilia defines a transsexual typology and provides a theory of transsexual motivation, in that Blanchard proposed that MtF transsexuals are either sexually attracted exclusively to men (homosexual) or are sexually attracted primarily to the thought or image of themselves as female (autogynephilic), and that autogynephilic transsexuals seek sex reassignment to actualize their autogynephilic desires. Despite growing professional acceptance, Blanchard's formulation is rejected by some MtF transsexuals as inconsistent with their experience. This rejection, I argue, results largely from the misconception that autogynephilia is a purely erotic phenomenon. Autogynephilia can more accurately be conceptualized as a type of sexual orientation and as a variety of romantic love, involving both erotic and affectional or attachment-based elements. This broader conception of autogynephilia addresses many of the objections to Blanchard's theory and is consistent with a variety of clinical observations concerning autogynephilic MtF transsexualism.",
"title": ""
},
{
"docid": "5906d20bea1c95399395d045f84f11c9",
"text": "Constructive interference (CI) enables concurrent transmissions to interfere non-destructively, so as to enhance network concurrency. In this paper, we propose deliberate synchronized constructive interference (Disco), which ensures concurrent transmissions of an identical packet to synchronize more precisely than traditional CI. Disco envisions concurrent transmissions to positively interfere at the receiver, and potentially allows orders of magnitude reductions in energy consumption and improvements in link quality. We also theoretically introduce a sufficient condition to construct Disco with IEEE 802.15.4 radio for the first time. Moreover, we propose Triggercast, a distributed middleware service, and show it is feasible to generate Disco on real sensor network platforms like TMote Sky. To synchronize transmissions of multiple senders at the chip level, Triggercast effectively compensates propagation and radio processing delays, and has 95th percentile synchronization errors of at most 250 ns. Triggercast also intelligently decides which co-senders to participate in simultaneous transmissions, and aligns their transmission time to maximize the overall link Packet Reception Ratio (PRR), under the condition of maximal system robustness. Extensive experiments in real testbeds demonstrate that Triggercast significantly improves PRR from 5 to 70 percent with seven concurrent senders. We also demonstrate that Triggercast provides 1.3χ PRR performance gains in average, when it is integrated with existing data forwarding protocols.",
"title": ""
},
{
"docid": "7a6d32d50e3b1be70889fc85ffdcac45",
"text": "Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning zero vector when applied to a constant image. Our algorithm comprises of outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to render the spectral analysis for the solutions of the corresponding linear equations. In addition, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.",
"title": ""
},
{
"docid": "2c5f0763b6c4888babc04af50bb89aaf",
"text": "A 1.8-V 14-b 12-MS/s pseudo-differential pipeline analog-to-digital converter (ADC) using a passive capacitor error-averaging technique and a nested CMOS gain-boosting technique is described. The converter is optimized for low-voltage low-power applications by applying an optimum stage-scaling algorithm at the architectural level and an opamp and comparator sharing technique at the circuit level. Prototyped in a 0.18-/spl mu/m 6M-1P CMOS process, this converter achieves a peak signal-to-noise plus distortion ratio (SNDR) of 75.5 dB and a 103-dB spurious-free dynamic range (SFDR) without trimming, calibration, or dithering. With a 1-MHz analog input, the maximum differential nonlinearity is 0.47 LSB and the maximum integral nonlinearity is 0.54 LSB. The large analog bandwidth of the front-end sample-and-hold circuit is achieved using bootstrapped thin-oxide transistors as switches, resulting in an SFDR of 97 dB when a 40-MHz full-scale input is digitized. The ADC occupies an active area of 10 mm/sup 2/ and dissipates 98 mW.",
"title": ""
},
{
"docid": "79465d290ab299b9d75e9fa617d30513",
"text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.",
"title": ""
},
{
"docid": "047c36e2650b8abde75cccaeb0368c88",
"text": "Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-build 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture; one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 ± 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.",
"title": ""
},
{
"docid": "3621dd85dc4ba3007cfa8ec1017b4e96",
"text": "The current lack of knowledge about the effect of maternally administered drugs on the developing fetus is a major public health concern worldwide. The first critical step toward predicting the safety of medications in pregnancy is to screen drug compounds for their ability to cross the placenta. However, this type of preclinical study has been hampered by the limited capacity of existing in vitro and ex vivo models to mimic physiological drug transport across the maternal-fetal interface in the human placenta. Here the proof-of-principle for utilizing a microengineered model of the human placental barrier to simulate and investigate drug transfer from the maternal to the fetal circulation is demonstrated. Using the gestational diabetes drug glyburide as a model compound, it is shown that the microphysiological system is capable of reconstituting efflux transporter-mediated active transport function of the human placental barrier to limit fetal exposure to maternally administered drugs. The data provide evidence that the placenta-on-a-chip may serve as a new screening platform to enable more accurate prediction of drug transport in the human placenta.",
"title": ""
},
{
"docid": "29cbdeb95a221820a6425e1249763078",
"text": "The concept of “Industry 4.0” that covers the topics of Internet of Things, cyber-physical system, and smart manufacturing, is a result of increasing demand of mass customized manufacturing. In this paper, a smart manufacturing framework of Industry 4.0 is presented. In the proposed framework, the shop-floor entities (machines, conveyers, etc.), the smart products and the cloud can communicate and negotiate interactively through networks. The shop-floor entities can be considered as agents based on the theory of multi-agent system. These agents implement dynamic reconfiguration in a collaborative manner to achieve agility and flexibility. However, without global coordination, problems such as load-unbalance and inefficiency may occur due to different abilities and performances of agents. Therefore, the intelligent evaluation and control algorithms are proposed to reduce the load-unbalance with the assistance of big data feedback. The experimental results indicate that the presented algorithms can easily be deployed in smart manufacturing system and can improve both load-balance and efficiency.",
"title": ""
},
{
"docid": "5744f6f5d6b2f0f5f150ec939d1f8c74",
"text": "We introduce a novel active learning framework for video annotation. By judiciously choosing which frames a user should annotate, we can obtain highly accurate tracks with minimal user effort. We cast this problem as one of active learning, and show that we can obtain excellent performance by querying frames that, if annotated, would produce a large expected change in the estimated object track. We implement a constrained tracker and compute the expected change for putative annotations with efficient dynamic programming algorithms. We demonstrate our framework on four datasets, including two benchmark datasets constructed with key frame annotations obtained by Amazon Mechanical Turk. Our results indicate that we could obtain equivalent labels for a small fraction of the original cost.",
"title": ""
},
{
"docid": "db8d146ad8e62fd7a558703ef20a6330",
"text": "In this paper, we focus on the problem of completion of multidimensional arrays (also referred to as tensors), in particular three-dimensional (3-D) arrays, from limited sampling. Our approach is based on a recently proposed tensor algebraic framework where 3-D tensors are treated as linear operators over the set of 2-D tensors. In this framework, one can obtain a factorization for 3-D data, referred to as the tensor singular value decomposition (t-SVD), which is similar to the SVD for matrices. t-SVD results in a notion of rank referred to as the tubal-rank. Using this approach we consider the problem of sampling and recovery of 3-D arrays with low tubal-rank. We show that by solving a convex optimization problem, which minimizes a convex surrogate to the tubal-rank, one can guarantee exact recovery with high probability as long as number of samples is of the order <inline-formula><tex-math notation=\"LaTeX\">$O(rnk \\log (nk))$ </tex-math></inline-formula>, given a tensor of size <inline-formula><tex-math notation=\"LaTeX\">$n\\times n\\times k$ </tex-math></inline-formula> with tubal-rank <inline-formula><tex-math notation=\"LaTeX\">$r$</tex-math></inline-formula> . The conditions under which this result holds are similar to the incoherence conditions for low-rank matrix completion under random sampling. The difference is that we define incoherence under the algebraic setup of t-SVD, which is different from the standard matrix incoherence conditions. We also compare the numerical performance of the proposed algorithm with some state-of-the-art approaches on real-world datasets.",
"title": ""
},
{
"docid": "e5cd0bdffd94215aa19a5fc29a1b6753",
"text": "Anhedonia is a core symptom of major depressive disorder (MDD), long thought to be associated with reduced dopaminergic function. However, most antidepressants do not act directly on the dopamine system and all antidepressants have a delayed full therapeutic effect. Recently, it has been proposed that antidepressants fail to alter dopamine function in antidepressant unresponsive MDD. There is compelling evidence that dopamine neurons code a specific phasic (short duration) reward-learning signal, described by temporal difference (TD) theory. There is no current evidence for other neurons coding a TD reward-learning signal, although such evidence may be found in time. The neuronal substrates of the TD signal were not explored in this study. Phasic signals are believed to have quite different properties to tonic (long duration) signals. No studies have investigated phasic reward-learning signals in MDD. Therefore, adults with MDD receiving long-term antidepressant medication, and comparison controls both unmedicated and acutely medicated with the antidepressant citalopram, were scanned using fMRI during a reward-learning task. Three hypotheses were tested: first, patients with MDD have blunted TD reward-learning signals; second, controls given an antidepressant acutely have blunted TD reward-learning signals; third, the extent of alteration in TD signals in major depression correlates with illness severity ratings. The results supported the hypotheses. Patients with MDD had significantly reduced reward-learning signals in many non-brainstem regions: ventral striatum (VS), rostral and dorsal anterior cingulate, retrosplenial cortex (RC), midbrain and hippocampus. However, the TD signal was increased in the brainstem of patients. As predicted, acute antidepressant administration to controls was associated with a blunted TD signal, and the brainstem TD signal was not increased by acute citalopram administration. In a number of regions, the magnitude of the abnormal signals in MDD correlated with illness severity ratings. The findings highlight the importance of phasic reward-learning signals, and are consistent with the hypothesis that antidepressants fail to normalize reward-learning function in antidepressant-unresponsive MDD. Whilst there is evidence that some antidepressants acutely suppress dopamine function, the long-term action of virtually all antidepressants is enhanced dopamine agonist responsiveness. This distinction might help to elucidate the delayed action of antidepressants. Finally, analogous to recent work in schizophrenia, the finding of abnormal phasic reward-learning signals in MDD implies that an integrated understanding of symptoms and treatment mechanisms is possible, spanning physiology, phenomenology and pharmacology.",
"title": ""
}
] | scidocsrr |
20ad2224c79a3c0d4f1b2fc6e65c2ff9 | A Review on Entity Relation Extraction | [
{
"docid": "741078742178d09f911ef9633befeb9b",
"text": "We introduce a novel kernel for comparing two text documents. The kernel is an inner product in the feature space consisting of all subsequences of length k. A subsequence is any ordered sequence of k characters occurring in the text though not necessarily contiguously. The subsequences are weighted by an exponentially decaying factor of their full length in the text, hence emphasising those occurrences which are close to contiguous. A direct computation of this feature vector would involve a prohibitive amount of computation even for modest values of k, since the dimension of the feature space grows exponentially with k. The paper describes how despite this fact the inner product can be efficiently evaluated by a dynamic programming technique. A preliminary experimental comparison of the performance of the kernel compared with a standard word feature space kernel [4] is made showing encouraging results.",
"title": ""
}
] | [
{
"docid": "fb123464a674e27f3dd36b109ad531e6",
"text": "Buerger exercise can improve the peripheral circulation of lower extremities. However, the evidence and a quantitative assessment of skin perfusion immediately after this exercise in patients with diabetes feet are still rare.We recruited 30 patients with unilateral or bilateral diabetic ulcerated feet in Chang Gung Memorial Hospital, Chia-Yi Branch, from October 2012 to December 2013. Real-time dorsal foot skin perfusion pressures (SPPs) before and after Buerger exercise were measured and analyzed. In addition, the severity of ischemia and the presence of ulcers before exercise were also stratified.A total of 30 patients with a mean age of 63.4 ± 13.7 years old were enrolled in this study. Their mean duration of diabetes was 13.6 ± 8.2 years. Among them, 26 patients had unilateral and 4 patients had bilateral diabetes foot ulcers. Of the 34 wounded feet, 23 (68%) and 9 (27%) feet were classified as Wagner class II and III, respectively. The real-time SPP measurement indicated that Buerger exercise significantly increased the level of SPP by more than 10 mm Hg (n = 46, 58.3 vs 70.0 mm Hg, P < 0.001). In terms of pre-exercise dorsal foot circulation condition, the results showed that Buerger exercise increased the level of SPP in severe ischemia (n = 5, 22.1 vs 37.3 mm Hg, P = 0.043), moderate ischemia (n = 14, 42.2 vs 64.4 mm Hg, P = 0.001), and borderline-normal (n = 7, 52.9 vs 65.4 mm Hg, P = 0.028) groups, respectively. However, the 20 feet with SPP levels more than 60 mm Hg were not improved significantly after exercise (n = 20, 58.3 vs 71.5 mm Hg, P = 0.239). As to the presence of ulcers, Buerger exercise increased the level of SPP in either unwounded feet (n = 12, 58.5 vs 66.0 mm Hg, P = 0.012) or wounded feet (n = 34, 58.3 vs 71.5 mm Hg, P < 0.001). The majority of the ulcers was either completely healed (9/34 = 27%) or still improving (14/34 = 41%).This study quantitatively demonstrates the evidence of dorsal foot peripheral circulation improvement after Buerger exercise in patients with diabetes.",
"title": ""
},
{
"docid": "b58261f83abd61d2cc2a7c25df12389f",
"text": "BACKGROUND\nSeasonal affective disorder (SAD) is a seasonal pattern of recurrent depressive episodes that is often treated with second-generation antidepressants (SGAs), light therapy or psychotherapy.\n\n\nOBJECTIVES\nTo assess the efficacy and safety of SGAs for the treatment of SAD in adults in comparison with placebo, light therapy, other SGAs or psychotherapy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Depression, Anxiety and Neuorosis Review Group's specialised register (CCDANCTR) on the 26 August 2011. The CCDANCTR contains reports of relevant randomised controlled trials from The Cochrane Library (all years), EMBASE (1974 to date), MEDLINE (1950 to date) and PsycINFO (1967 to date). In addition, we searched pharmaceutical industry trials registers via the Internet to identify unpublished trial data. Furthermore, we searched OVID MEDLINE, MEDLINE In-process, EMBASE and PsycINFO to 27July 2011 for publications on adverse effects (including non-randomised studies).\n\n\nSELECTION CRITERIA\nFor efficacy we included randomised trials of SGAs compared with other SGAs, placebo, light therapy or psychotherapy in adult participants with SAD. For adverse effects we also included non-randomised studies.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors screened abstracts and full-text publications against the inclusion criteria. Data abstraction and risk of bias assessment were conducted by one reviewer and checked for accuracy and completeness by a second. We pooled data for meta-analysis where the participant groups were similar and the studies assessed the same treatments with the same comparator and had similar definitions of outcome measures over a similar duration of treatment.\n\n\nMAIN RESULTS\nFor efficacy we included three randomised trials of between five and eight weeks duration with a total of 204 participants. For adverse effects we included two randomised trials and three observational (non-randomised) studies of five to eight weeks duration with a total of 225 participants. Overall, the randomised trials had low-to-moderate risk of bias, and the observational studies had a high risk of bias (due to small size and high attrition). The participants in the studies all met DSM (Diagnostic and Statistics Manual of Mental Disorders) criteria for SAD. The average age was approximately 40 years and 70% of the participants were female.Results from one trial with 68 participants showed that fluoxetine was not significantly more effective than placebo in achieving clinical response (risk ratio (RR) 1.62, 95% confidence interval (CI) 0.92 to 2.83). The number of adverse effects were similar between the two groups.We located two trials that contained a total of 136 participants for the comparison fluoxetine versus light therapy. Our meta-analysis of the results of the two trials showed fluoxetine and light therapy to be approximately equal in treating seasonal depression: RR of response 0.98 (95% CI 0.77 to 1.24), RR of remission 0.81 (95% CI 0.39 to 1.71). The number of adverse effects was similar in both groups.Two of the three randomised trials and three non-randomised studies contained adverse effect data on 225 participants who received fluoxetine, escitalopram, duloxetine, reboxetine, light therapy or placebo. We were only able to obtain crude rates of adverse effects, so any interpretation of this needs to be undertaken with caution. 
Between 22% and 100% of participants who received a SGA suffered an adverse effect and between 15% and 27% of participants withdrew from the studies because of adverse effects.\n\n\nAUTHORS' CONCLUSIONS\nEvidence for the effectiveness of SGAs is limited to one small trial of fluoxetine compared with placebo, which shows a non-significant effect in favour of fluoxetine, and two small trials comparing fluoxetine against light therapy, which suggest equivalence between the two interventions. The lack of available evidence precludes the ability to draw any overall conclusions on the use of SGAs for SAD. Further larger RCTs are required to expand and strengthen the evidence base on this topic, and should also include comparisons with psychotherapy and other SGAs.Data on adverse events were sparse, and a comparative analysis was not possible. Therefore the data we obtained on adverse effects is not robust and our confidence in the data is limited. Overall, up to 27% of participants treated with SGAs for SAD withdrew from the studies early due to adverse effects. The overall quality of evidence in this review is very low.",
"title": ""
},
{
"docid": "4a536c1186a1d1d1717ec1e0186b262c",
"text": "In this paper, I outline a perspective on organizational transformation which proposes change as endemic to the practice of organizing and hence as enacted through the situated practices of organizational actors as they improvise, innovate, and adjust their work routines over time. I ground this perspective in an empirical study which examined the use of a new information technology within one organization over a two year period. In this organization, a series of subtle but nonetheless significant changes were enacted over time as organizational actors appropriated the new technology into their work practices, and then experimented with local innovations, responded to unanticipated breakdowns and contingencies, initiated opportunistic shifts in structure and coordination mechanisms, and improvised various procedural, cognitive, and normative variations to accommodate their evolving use of the technology. These findings provide the empirical basis for a practice-based perspective on organizational transformation. Because it is grounded in the micro-level changes that actors enact over time as they make sense of and act in the world, a practice lens can avoid the strong assumptions of rationality, determinism, or discontinuity characterizing existing change perspectives. A situated change perspective may offer a particularly useful strategy for analyzing change in organizations turning increasingly away from patterns of stability, bureaucracy, and control to those of flexibility, selforganizing, and learning.",
"title": ""
},
{
"docid": "3ce8a1233ef1410f0b46d449a888624e",
"text": "We contribute three complementary mechanisms that increase the security, efficiency, and transparency of blockchain systems. We evaluate the use of status report messages that, like canaries in a coal mine, allow peers to detect both malicious miners and eclipse attacks almost immediately. We outline a mechanism, using these reports, that allow blockchain users to quantify the risk of a double-spend attack within minutes, versus the several hours required by the current system. We also devise a novel method of interactive set reconciliation for efficient status reports and blocks. Our approach, called Graphene, couples a Bloom filter with an IBLT, and reduces traffic overhead by about 45%. As an alternative for Bitcoin’s inefficient and opaque peer-to-peer (p2p) architecture, we also introduce Canary that separates the network’s data and control planes. Peers submit transactions directly to miners, who announce new blocks and transactions via distribution networks whose topology they manage. We show that Canary’s tree-based topology reduces traffic overhead by about 30% compared to the current architecture. When Graphene is coupled with Canary, Bitcoin’s traffic overhead is reduced by about 80%, while detecting eclipse attacks and increasing transparency.",
"title": ""
},
{
"docid": "ee2bdcfbdaeb16d0ff429c9ca3c0e9e8",
"text": "Hybrid zero dynamics (HZD) has emerged as a popular framework for dynamic walking but has significant implementation difficulties when applied to the high degrees of freedom humanoids. The primary impediment is the process of gait design—it is difficult for optimizers to converge on a viable set of virtual constraints defining a gait. This paper presents a methodology that allows for fast and reliable generation of dynamic robotic walking gaits through the HZD framework, even in the presence of underactuation. Specifically, we describe an optimization formulation that builds upon the novel combination of HZD and direct collocation methods. Furthermore, achieving a scalable implementation required developing a defect-variable substitution formulation to simplify expressions, which ultimately allows us to generate compact analytic Jacobians of the constraints. We experimentally validate our methodology on an underactuated humanoid, DURUS, a spring-legged machine designed to facilitate energy-economical walking. We show that the optimization approach, in concert with the HZD framework, yields dynamic and stable walking gaits in hardware with a total electrical cost of transport of 1.33.",
"title": ""
},
{
"docid": "79c35abdd2a3a37782dd63ea6df6e95e",
"text": "Heart disease is one of the main sources of demise around the world and it is imperative to predict the disease at a premature phase. The computer aided systems help the doctor as a tool for predicting and diagnosing heart disease. The objective of this review is to widespread about Heart related cardiovascular disease and to brief about existing decision support systems for the prediction and diagnosis of heart disease supported by data mining and hybrid intelligent techniques .",
"title": ""
},
{
"docid": "f262c85e241e0c6dd6eb472841284345",
"text": "BACKGROUND\nWe evaluated the feasibility and tolerability of triple- versus double-drug chemotherapy in elderly patients with oesophagogastric cancer.\n\n\nMETHODS\nPatients aged 65 years or older with locally advanced or metastatic oesophagogastric cancer were stratified and randomised to infusional 5-FU, leucovorin and oxaliplatin without (FLO) or with docetaxel 50 mg/m(2) (FLOT) every 2 weeks. The study is registered at ClinicalTrials.gov, identifier NCT00737373.\n\n\nFINDINGS\nOne hundred and forty three (FLO, 71; FLOT, 72) patients with a median age of 70 years were enrolled. The triple combination was associated with more treatment-related National Cancer Institute Common Toxicity Criteria (NCI-CTC) grade 3/4 adverse events (FLOT, 81.9%; FLO, 38.6%; P<.001) and more patients experiencing a ≥10-points deterioration of European Organization for Research and Treatment of Cancer Quality of Life (EORTC QoL) global health status scores (FLOT, 47.5%; FLO 20.5%; p=.011). The triple combination was associated with more alopecia (P<.001), neutropenia (P<.001), leukopenia (P<.001), diarrhoea (P=.006) and nausea (P=.029).). No differences were observed in treatment duration and discontinuation due to toxicity, cumulative doses or toxic deaths between arms. The triple combination improved response rates and progression-free survival in the locally advanced subgroup and in the subgroup of patients aged between 65 and 70 years but not in the metastatic group or in patients aged 70 years and older.\n\n\nINTERPRETATION\nThe triple-drug chemotherapy was feasible in elderly patients with oesophagogastric cancer. However, toxicity was significantly increased and QoL deteriorated in a relevant proportion of patients.\n\n\nFUNDING\nThe study was partially funded by Sanofi-Aventis.",
"title": ""
},
{
"docid": "fdab4af34adebd0d682134f3cf13d794",
"text": "Threat evaluation (TE) is a process used to assess the threat values (TVs) of air-breathing threats (ABTs), such as air fighters, that are approaching defended assets (DAs). This study proposes an automatic method for conducting TE using radar information when ABTs infiltrate into territory where DAs are located. The method consists of target asset (TA) prediction and TE. We divide a friendly territory into discrete cells based on the effective range of anti-aircraft missiles. The TA prediction identifies the TA of each ABT by predicting the ABT’s movement through cells in the territory via a Markov chain, and the cell transition is modeled by neural networks. We calculate the TVs of the ABTs based on the TA prediction results. A simulation-based experiment revealed that the proposed method outperformed TE based on the closest point of approach or the radial speed vector methods. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9fd3321922a73539210cb5b73d8d5d9c",
"text": "This paper presents a new model for controlling information flow in systems with mutual distrust and decentralized authority. The model allows users to share information with distrusted code (e.g., downloaded applets), yet still control how that code disseminates the shared information to others. The model improves on existing multilevel security models by allowing users to declassify information in a decentralized way, and by improving support for fine-grained data sharing. The paper also shows how static program analysis can be used to certify proper information flows in this model and to avoid most run-time information flow checks.",
"title": ""
},
{
"docid": "25634ffc58b358944d4428139764b36e",
"text": "Here we present the results of a wafer-level approach allowing the collective fabrication of gyroscope sensors based on quartz vibrating MEMS. More specifically, we focus on suspended quartz tuning fork microstructures of a desired thickness over controlled depth cavities. This approach is based on the bonding and thinning of 4-inch z-cut quartz wafer on pre-structured silicon wafer. InfraRed (IR) inspection shows a large bonded area (>98%) while structural characterizations of the thinned quartz layer indicate a crystal quality and mechanical properties equivalent to quartz bulk material. The obtained gyroscope exhibits a quality factor (Q) around 12300 at 86.6 kHz very close to quartz theoretical thermoelastic limit. This Quartz-On-Silicon (QOS) technology open the way to a new generation of highly integrated quartz devices.",
"title": ""
},
{
"docid": "e68c73806392d10c3c3fd262f6105924",
"text": "Dynamic programming (DP) is a powerful paradigm for general, nonlinear optimal control. Computing exact DP solutions is in general only possible when the process states and the control actions take values in a small discrete set. In practice, it is necessary to approximate the solutions. Therefore, we propose an algorithm for approximate DP that relies on a fuzzy partition of the state space, and on a discretization of the action space. This fuzzy Q-iteration algorithmworks for deterministic processes, under the discounted return criterion. We prove that fuzzy Q -iteration asymptotically converges to a solution that lies within a bound of the optimal solution. A bound on the suboptimality of the solution obtained in a finite number of iterations is also derived. Under continuity assumptions on the dynamics and on the reward function, we show that fuzzyQ -iteration is consistent, i.e., that it asymptotically obtains the optimal solution as the approximation accuracy increases. These properties hold both when the parameters of the approximator are updated in a synchronous fashion, and when they are updated asynchronously. The asynchronous algorithm is proven to converge at least as fast as the synchronous one. The performance of fuzzy Q iteration is illustrated in a two-link manipulator control problem. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "26884c49c5ada3fc80dbc2f2d1e5660b",
"text": "We introduce a complete pipeline for recognizing and classifying people’s clothing in natural scenes. This has several interesting applications, including e-commerce, event and activity recognition, online advertising, etc. The stages of the pipeline combine a number of state-of-the-art building blocks such as upper body detectors, various feature channels and visual attributes. The core of our method consists of a multi-class learner based on a Random Forest that uses strong discriminative learners as decision nodes. To make the pipeline as automatic as possible we also integrate automatically crawled training data from the web in the learning process. Typically, multi-class learning benefits from more labeled data. Because the crawled data may be noisy and contain images unrelated to our task, we extend Random Forests to be capable of transfer learning from different domains. For evaluation, we define 15 clothing classes and introduce a benchmark data set for the clothing classification task consisting of over 80, 000 images, which we make publicly available. We report experimental results, where our classifier outperforms an SVM baseline with 41.38 % vs 35.07 % average accuracy on challenging benchmark data.",
"title": ""
},
{
"docid": "f3db1251df92011ec0e8e309dd119e43",
"text": "Sentiment expression in microblog posts can be affected by user’s personal character, opinion bias, political stance and so on. Most of existing personalized microblog sentiment classification methods suffer from the insufficiency of discriminative tweets for personalization learning. We observed that microblog users have consistent individuality and opinion bias in different languages. Based on this observation, in this paper we propose a novel user-attention-based Convolutional Neural Network (CNN) model with adversarial cross-lingual learning framework. The user attention mechanism is leveraged in CNN model to capture user’s languagespecific individuality from the posts. Then the attention-based CNN model is incorporated into a novel adversarial cross-lingual learning framework, in which with the help of user properties as bridge between languages, we can extract the language-specific features and language-independent features to enrich the user post representation so as to alleviate the data insufficiency problem. Results on English and Chinese microblog datasets confirm that our method outperforms state-of-the-art baseline algorithms with large margins.",
"title": ""
},
{
"docid": "1241bc6b7d3522fe9e285ae843976524",
"text": "In many new high performance designs, the leakage component of power consumption is comparable to the switching component. Reports indicate that 40% or even higher percentage of the total power consumption is due to the leakage of transistors. This percentage will increase with technology scaling unless effective techniques are introduced to bring leakage under control. This article focuses on circuit optimization and design automation techniques to accomplish this goal. The first part of the article provides an overview of basic physics and process scaling trends that have resulted in a significant increase in the leakage currents in CMOS circuits. This part also distinguishes between the standby and active components of the leakage current. The second part of the article describes a number of circuit optimization techniques for controlling the standby leakage current, including power gating and body bias control. The third part of the article presents techniques for active leakage control, including use of multiple-threshold cells, long channel devices, input vector design, transistor stacking to switching noise, and sizing with simultaneous threshold and supply voltage assignment.",
"title": ""
},
{
"docid": "dc2d5f9bfe41246ae9883aa6c0537c40",
"text": "Phosphatidylinositol 3-kinases (PI3Ks) are crucial coordinators of intracellular signalling in response to extracellular stimuli. Hyperactivation of PI3K signalling cascades is one of the most common events in human cancers. In this Review, we discuss recent advances in our knowledge of the roles of specific PI3K isoforms in normal and oncogenic signalling, the different ways in which PI3K can be upregulated, and the current state and future potential of targeting this pathway in the clinic.",
"title": ""
},
{
"docid": "6a5e0e30eb5b7f2efe76e0e58e04ae4a",
"text": "We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call “percepts” using Gated-Recurrent-Unit Recurrent Networks (GRUs). Our method relies on percepts that are extracted from all levels of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts, however, can lead to high-dimensionality video representations. To mitigate this effect and control the number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler caption-decoder model and without extra 3D CNN features.",
"title": ""
},
{
"docid": "af359933fad5d689718e2464d9c4966c",
"text": "Distant supervision can effectively label data for relation extraction, but suffers from the noise labeling problem. Recent works mainly perform soft bag-level noise reduction strategies to find the relatively better samples in a sentence bag, which is suboptimal compared with making a hard decision of false positive samples in sentence level. In this paper, we introduce an adversarial learning framework, which we named DSGAN, to learn a sentencelevel true-positive generator. Inspired by Generative Adversarial Networks, we regard the positive samples generated by the generator as the negative samples to train the discriminator. The optimal generator is obtained until the discrimination ability of the discriminator has the greatest decline. We adopt the generator to filter distant supervision training dataset and redistribute the false positive instances into the negative set, in which way to provide a cleaned dataset for relation classification. The experimental results show that the proposed strategy significantly improves the performance of distant supervision relation extraction comparing to state-of-the-art systems.",
"title": ""
},
{
"docid": "6aed3ffa374139fa9c4e0b7c1afb7841",
"text": "Recent longitudinal and cross-sectional aging research has shown that personality traits continue to change in adulthood. In this article, we review the evidence for mean-level change in personality traits, as well as for individual differences in change across the life span. In terms of mean-level change, people show increased selfconfidence, warmth, self-control, and emotional stability with age. These changes predominate in young adulthood (age 20-40). Moreover, mean-level change in personality traits occurs in middle and old age, showing that personality traits can change at any age. In terms of individual differences in personality change, people demonstrate unique patterns of development at all stages of the life course, and these patterns appear to be the result of specific life experiences that pertain to a person's stage of life.",
"title": ""
},
{
"docid": "395ccc8d0f91f221311e7eb72379989e",
"text": "Accurate junction detection and characterization are of primary importance for several aspects of scene analysis, including depth recovery and motion analysis. In this work, we introduce a generic junction analysis scheme. The first asset of the proposed procedure is an automatic criterion for the detection of junctions, permitting to deal with textured parts in which no detection is expected. Second, the method yields a characterization of L-, Y- and X- junctions, including a precise computation of their type, localization and scale. Contrary to classical approaches, scale characterization does not rely on the linear scale-space. First, an a contrario approach is used to compute the meaningfulness of a junction. This approach relies on a statistical modeling of suitably normalized gray level gradients. Then, exclusion principles between junctions permit their precise characterization. We give implementation details for this procedure and evaluate its efficiency through various experiments.",
"title": ""
},
{
"docid": "4b7714c60749a2f945f21ca3d6d367fe",
"text": "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encodeattend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.ive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encodeattend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.",
"title": ""
}
] | scidocsrr |
bb3b89dd1acf40f12a44eab4bf91d616 | Big data and digital forensics | [
{
"docid": "dc8ffc5fd84b3af4cc88d75f7bc88f77",
"text": "Digital crimes is big problem due to large numbers of data access and insufficient attack analysis techniques so there is the need for improvements in existing digital forensics techniques. With growing size of storage capacity these digital forensic investigations are getting more difficult. Visualization allows for displaying large amounts of data at once. Integrated visualization of data distribution bars and rules, visualization of behaviour and comprehensive analysis, maps allow user to analyze different rules and data at different level, with any kind of anomaly in data. Data mining techniques helps to improve the process of visualization. These papers give comprehensive review on various visualization techniques with various anomaly detection techniques.",
"title": ""
}
] | [
{
"docid": "5931cb779b24065c5ef48451bc46fac4",
"text": "In order to provide a material that can facilitate the modeling and construction of a Furuta pendulum, this paper presents the deduction, step-by-step, of a Furuta pendulum mathematical model by using the Lagrange equations of motion. Later, a mechanical design of the Furuta pendulum is carried out via the software Solid Works and subsequently a prototype is built. Numerical simulations of the Furuta pendulum model are performed via Mat lab-Simulink. Furthermore, the Furuta pendulum prototype built is experimentally tested by using Mat lab-Simulink, Control Desk, and a DS1104 board from dSPACE.",
"title": ""
},
{
"docid": "938afbc53340a3aa6e454d17789bf021",
"text": "BACKGROUND\nAll cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nIn the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a \"shortcut\" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThe results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a \"shortcut\" for trustworthiness judgments.",
"title": ""
},
{
"docid": "235ff4cb1c0091f95caffd528ed95755",
"text": "Natural language is a common type of input for data processing systems. Therefore, it is often required to have a large testing data set of this type. In this context, the task to automatically generate natural language texts, which maintain the properties of real texts is desirable. However, current synthetic data generators do not capture natural language text data sufficiently. In this paper, we present a preliminary study on different generative models for text generation, which maintain specific properties of natural language text, i.e., the sentiment of a review text. In a series of experiments using different data sets and sentiment analysis methods, we show that generative models can generate texts with a specific sentiment and that hidden Markov model based text generation achieves less accuracy than Markov chain based text generation, but can generate a higher number of distinct texts.",
"title": ""
},
{
"docid": "2a8c5de43ce73c360a5418709a504fa8",
"text": "The INTERSPEECH 2018 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the Atypical Affect Sub-Challenge, four basic emotions annotated in the speech of handicapped subjects have to be classified; in the Self-Assessed Affect Sub-Challenge, valence scores given by the speakers themselves are used for a three-class classification problem; in the Crying Sub-Challenge, three types of infant vocalisations have to be told apart; and in the Heart Beats Sub-Challenge, three different types of heart beats have to be determined. We describe the Sub-Challenges, their conditions, and baseline feature extraction and classifiers, which include data-learnt (supervised) feature representations by end-to-end learning, the ‘usual’ ComParE and BoAW features, and deep unsupervised representation learning using the AUDEEP toolkit for the first time in the challenge series.",
"title": ""
},
{
"docid": "f8622acd0d0c2811b6ae2d0b5d4c9a6b",
"text": "Squalene is a linear triterpene that is extensively utilized as a principal component of parenteral emulsions for drug and vaccine delivery. In this review, the chemical structure and sources of squalene are presented. Moreover, the physicochemical and biological properties of squalene-containing emulsions are evaluated in the context of parenteral formulations. Historical and current parenteral emulsion products containing squalene or squalane are discussed. The safety of squalene-based products is also addressed. Finally, analytical techniques for characterization of squalene emulsions are examined.",
"title": ""
},
{
"docid": "b657aeceeee6c29330cf45dcc40d6198",
"text": "A small form-factor 60-GHz SiGe BiCMOS radio with two antennas-in-package is presented. The fully-integrated feature-rich transceiver provides a complete RF solution for mobile WiGig/IEEE 802.11ad applications.",
"title": ""
},
{
"docid": "72e6c3c800cd981b1e1dd379d3bbf304",
"text": "Brain activity recorded noninvasively is sufficient to control a mobile robot if advanced robotics is used in combination with asynchronous electroencephalogram (EEG) analysis and machine learning techniques. Until now brain-actuated control has mainly relied on implanted electrodes, since EEG-based systems have been considered too slow for controlling rapid and complex sequences of movements. We show that two human subjects successfully moved a robot between several rooms by mental control only, using an EEG-based brain-machine interface that recognized three mental states. Mental control was comparable to manual control on the same task with a performance ratio of 0.74.",
"title": ""
},
{
"docid": "8c8ece47107bc1580e925e42d266ec87",
"text": "How do brains shape social networks, and how do social ties shape the brain? Social networks are complex webs by which ideas spread among people. Brains comprise webs by which information is processed and transmitted among neural units. While brain activity and structure offer biological mechanisms for human behaviors, social networks offer external inducers or modulators of those behaviors. Together, these two axes represent fundamental contributors to human experience. Integrating foundational knowledge from social and developmental psychology and sociology on how individuals function within dyads, groups, and societies with recent advances in network neuroscience can offer new insights into both domains. Here, we use the example of how ideas and behaviors spread to illustrate the potential of multilayer network models.",
"title": ""
},
{
"docid": "44de39859665488f8df950007d7a01c6",
"text": "Topic models provide insights into document collections, and their supervised extensions also capture associated document-level metadata such as sentiment. However, inferring such models from data is often slow and cannot scale to big data. We build upon the “anchor” method for learning topic models to capture the relationship between metadata and latent topics by extending the vector-space representation of word-cooccurrence to include metadataspecific dimensions. These additional dimensions reveal new anchor words that reflect specific combinations of metadata and topic. We show that these new latent representations predict sentiment as accurately as supervised topic models, and we find these representations more quickly without sacrificing interpretability. Topic models were introduced in an unsupervised setting (Blei et al., 2003), aiding in the discovery of topical structure in text: large corpora can be distilled into human-interpretable themes that facilitate quick understanding. In addition to illuminating document collections for humans, topic models have increasingly been used for automatic downstream applications such as sentiment analysis (Titov and McDonald, 2008; Paul and Girju, 2010; Nguyen et al., 2013). Unfortunately, the structure discovered by unsupervised topic models does not necessarily constitute the best set of features for tasks such as sentiment analysis. Consider a topic model trained on Amazon product reviews. A topic model might discover a topic about vampire romance. However, we often want to go deeper, discovering facets of a topic that reflect topic-specific sentiment, e.g., “buffy” and “spike” for positive sentiment vs. “twilight” and “cullen” for negative sentiment. Techniques for discovering such associations, called supervised topic models (Section 2), both produce interpretable topics and predict metadata values. While unsupervised topic models now have scalable inference strategies (Hoffman et al., 2013; Zhai et al., 2012), supervised topic model inference has not received as much attention and often scales poorly. The anchor algorithm is a fast, scalable unsupervised approach for finding “anchor words”—precise words with unique co-occurrence patterns that can define the topics of a collection of documents. We augment the anchor algorithm to find supervised sentiment-specific anchor words (Section 3). Our algorithm is faster and just as effective as traditional schemes for supervised topic modeling (Section 4). 1 Anchors: Speedy Unsupervised Models The anchor algorithm (Arora et al., 2013) begins with a V × V matrix Q̄ of word co-occurrences, where V is the size of the vocabulary. Each word type defines a vector Q̄i,· of length V so that Q̄i,j encodes the conditional probability of seeing word j given that word i has already been seen. Spectral methods (Anandkumar et al., 2012) and the anchor algorithm are fast alternatives to traditional topic model inference schemes because they can discover topics via these summary statistics (quadratic in the number of types) rather than examining the whole dataset (proportional to the much larger number of tokens). The anchor algorithm takes its name from the idea of anchor words—words which unambiguously identify a particular topic. For instance, “wicket” might be an anchor word for the cricket topic. Thus, for any anchor word a, Q̄a,· will look like a topic distribution. 
Q̄wicket,· will have high probability for “bowl”, “century”, “pitch”, and “bat”; these words are related to cricket, but they cannot be anchor words because they are also related to other topics. Because these other non-anchor words could be topically ambiguous, their co-occurrence must be explained through some combination of anchor words; thus for non-anchor word i,",
"title": ""
},
{
"docid": "8b67be5c3adac9bcdbc1aa836708987d",
"text": "The adaptive toolbox is a Darwinian-inspired theory that conceives of the mind as a modular system that is composed of heuristics, their building blocks, and evolved capacities. The study of the adaptive toolbox is descriptive and analyzes the selection and structure of heuristics in social and physical environments. The study of ecological rationality is prescriptive and identifies the structure of environments in which specific heuristics either succeed or fail. Results have been used for designing heuristics and environments to improve professional decision making in the real world.",
"title": ""
},
{
"docid": "f9d1777be40b879aee2f6e810422d266",
"text": "This study intended to examine the effect of ground colour on memory performance. Most of the past research on colour-memory relationship focus on the colour of the figure rather than the background. Based on these evidences, this study try to extend the previous works to the ground colour and how its effect memory performance based on recall rate. 90 undergraduate students will participate in this study. The experimental design will be used is multiple independent group experimental design. Fifty geometrical shapes will be used in the study phase with measurement of figure, 4.74cm x 3.39cm and ground, 19cm x 25cm. The participants will be measured on numbers of shape that are being recall in test phase in three experimental conditions, coloured background, non-coloured background and mix between coloured and non-coloured background slides condition. It is hypothesized that shape with coloured background will be recalled better than shape with non-coloured background. Analysis of variance (ANOVA) statistical procedure will be used to analyse the data of recall performance between three experimental groups using Statistical Package for Social Sciences (SPSS 17.0) to examine the cause and effect relationship between those variables.",
"title": ""
},
{
"docid": "874e60d3f37aa01d201294ed247eb6a4",
"text": "FokI is a type IIs restriction endonuclease comprised of a DNA recognition domain and a catalytic domain. The structural similarity of the FokI catalytic domain to the type II restriction endonuclease BamHI monomer suggested that the FokI catalytic domains may dimerize. In addition, the FokI structure, presented in an accompanying paper in this issue of Proceedings, reveals a dimerization interface between catalytic domains. We provide evidence here that FokI catalytic domain must dimerize for DNA cleavage to occur. First, we show that the rate of DNA cleavage catalyzed by various concentrations of FokI are not directly proportional to the protein concentration, suggesting a cooperative effect for DNA cleavage. Second, we constructed a FokI variant, FokN13Y, which is unable to bind the FokI recognition sequence but when mixed with wild-type FokI increases the rate of DNA cleavage. Additionally, the FokI catalytic domain that lacks the DNA binding domain was shown to increase the rate of wild-type FokI cleavage of DNA. We also constructed an FokI variant, FokD483A, R487A, which should be defective for dimerization because the altered residues reside at the putative dimerization interface. Consistent with the FokI dimerization model, the variant FokD483A, R487A revealed greatly impaired DNA cleavage. Based on our work and previous reports, we discuss a pathway of DNA binding, dimerization, and cleavage by FokI endonuclease.",
"title": ""
},
{
"docid": "cd92f750461aff9877853f483cf09ecf",
"text": "Designing and maintaining Web applications is one of the major challenges for the software industry of the year 2000. In this paper we present Web Modeling Language (WebML), a notation for specifying complex Web sites at the conceptual level. WebML enables the high-level description of a Web site under distinct orthogonal dimensions: its data content (structural model), the pages that compose it (composition model), the topology of links between pages (navigation model), the layout and graphic requirements for page rendering (presentation model), and the customization features for one-to-one content delivery (personalization model). All the concepts of WebML are associated with a graphic notation and a textual XML syntax. WebML specifications are independent of both the client-side language used for delivering the application to users, and of the server-side platform used to bind data to pages, but they can be effectively used to produce a site implementation in a specific technological setting. WebML guarantees a model-driven approach to Web site development, which is a key factor for defining a novel generation of CASE tools for the construction of complex sites, supporting advanced features like multi-device access, personalization, and evolution management. The WebML language and its accompanying design method are fully implemented in a pre-competitive Web design tool suite, called ToriiSoft.",
"title": ""
},
{
"docid": "42ebaee6fdbfc487ae2a21e8a55dd3e4",
"text": "Human motion prediction, forecasting human motion in a few milliseconds conditioning on a historical 3D skeleton sequence, is a long-standing problem in computer vision and robotic vision. Existing forecasting algorithms rely on extensive annotated motion capture data and are brittle to novel actions. This paper addresses the problem of few-shot human motion prediction, in the spirit of the recent progress on few-shot learning and meta-learning. More precisely, our approach is based on the insight that having a good generalization from few examples relies on both a generic initial model and an effective strategy for adapting this model to novel tasks. To accomplish this, we propose proactive and adaptive meta-learning (PAML) that introduces a novel combination of model-agnostic meta-learning and model regression networks and unifies them into an integrated, end-to-end framework. By doing so, our meta-learner produces a generic initial model through aggregating contextual information from a variety of prediction tasks, while effectively adapting this model for use as a task-specific one by leveraging learningto-learn knowledge about how to transform few-shot model parameters to many-shot model parameters. The resulting PAML predictor model significantly improves the prediction performance on the heavily benchmarked H3.6M dataset in the small-sample size regime.",
"title": ""
},
{
"docid": "eda6795cb79e912a7818d9970e8ca165",
"text": "This study aimed to examine the relationship between maximum leg extension strength and sprinting performance in youth elite male soccer players. Sixty-three youth players (12.5 ± 1.3 years) performed 5 m, flying 15 m and 20 m sprint tests and a zigzag agility test on a grass field using timing gates. Two days later, subjects performed a one-repetition maximum leg extension test (79.3 ± 26.9 kg). Weak to strong correlations were found between leg extension strength and the time to perform 5 m (r = -0.39, p = 0.001), flying 15 m (r = -0.72, p < 0.001) and 20 m (r = -0.67, p < 0.001) sprints; between body mass and 5 m (r = -0.43, p < 0.001), flying 15 m (r = -0.75, p < 0.001), 20 m (r = -0.65, p < 0.001) sprints and agility (r =-0.29, p < 0.001); and between height and 5 m (r = -0.33, p < 0.01) and flying 15 m (r = -0.74, p < 0.001) sprints. Our results show that leg muscle strength and anthropometric variables strongly correlate with sprinting ability. This suggests that anthropometric characteristics should be considered to compare among youth players, and that youth players should undergo strength training to improve running speed.",
"title": ""
},
{
"docid": "61bde9866c99e98aac813a9410d33189",
"text": ": Steganography is an art and science of writing hidden messages in such a way that no one apart from the intended recipient knows the existence of the message.The maximum number of bits that can be used for LSB audio steganography without causing noticeable perceptual distortion to the host audio signal is 4 LSBs, if 16 bits per sample audio sequences are used.We propose two novel approaches of substit ution technique of audio steganography that improves the capacity of cover audio for embedding additional data. Using these methods, message bits are embedded into multiple and variable LSBs. These methods utilize upto 7 LSBs for embedding data.Results show that both these methods improve capacity of data hiding of cover audio by 35% to 70% as compared to the standerd LSB algorithm with 4 LSBs used for data embedding. And using encryption and decryption techniques performing cryptography. So for this RSA algorithm used. KeywordsInformation hiding,Audio steganography,Least significant bit(LSB),Most significant bit(MSB)",
"title": ""
},
{
"docid": "64ae34c959e0e4c9a6a155eeb334b3ea",
"text": "Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. After this, a twochannel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task.",
"title": ""
},
{
"docid": "19ea9b23f8757804c23c21293834ff3f",
"text": "We try to address the problem of document layout understanding using a simple algorithm which generalizes across multiple domains while training on just few examples per domain. We approach this problem via supervised object detection method and propose a methodology to overcome the requirement of large datasets. We use the concept of transfer learning by pre-training our object detector on a simple artificial (source) dataset and fine-tuning it on a tiny domain specific (target) dataset. We show that this methodology works for multiple domains with training samples as less as 10 documents. We demonstrate the effect of each component of the methodology in the end result and show the superiority of this methodology over simple object detectors.",
"title": ""
},
{
"docid": "b6cc41414ad1dae4ccd2fcf4df1bd3b6",
"text": "Bio-implantable sensors using radio-frequency telemetry links that enable the continuous monitoring and recording of physiological data are receiving a great deal of attention. The objective of this paper is to study the feasibility of an implantable sensor for tissue characterization. This has been done by querying an LC sensor surrounded by dispersive tissues by an external antenna. The resonant frequency of the sensor is monitored by measuring the input impedance of the antenna, and correlated to the desired quantities. Using an equivalent circuit model of the sensor that accounts for the properties of the encapsulating tissue, analytical expressions have been developed for the extraction of the tissue permittivity and conductivity. Finally, experimental validation has been performed with a telemetry link that consists of a loop antenna and a fabricated LC sensor immersed in single and multiple dispersive phantom materials.",
"title": ""
},
{
"docid": "9f84ec96cdb45bcf333db9f9459a3d86",
"text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 × 2 and 2 × 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.",
"title": ""
}
] | scidocsrr |
245bd7b11aef57b7c19cb348a24cd9dd | 3D texture analysis on MRI images of Alzheimer’s disease | [
{
"docid": "edd6d9843c8c24497efa336d1a26be9d",
"text": "Alzheimer's disease (AD) can be diagnosed with a considerable degree of accuracy. In some centers, clinical diagnosis predicts the autopsy diagnosis with 90% certainty in series reported from academic centers. The characteristic histopathologic changes at autopsy include neurofibrillary tangles, neuritic plaques, neuronal loss, and amyloid angiopathy. Mutations on chromosomes 21, 14, and 1 cause familial AD. Risk factors for AD include advanced age, lower intelligence, small head size, and history of head trauma; female gender may confer additional risks. Susceptibility genes do not cause the disease by themselves but, in combination with other genes or epigenetic factors, modulate the age of onset and increase the probability of developing AD. Among several putative susceptibility genes (on chromosomes 19, 12, and 6), the role of apolipoprotein E (ApoE) on chromosome 19 has been repeatedly confirmed. Protective factors include ApoE-2 genotype, history of estrogen replacement therapy in postmenopausal women, higher educational level, and history of use of nonsteroidal anti-inflammatory agents. The most proximal brain events associated with the clinical expression of dementia are progressive neuronal dysfunction and loss of neurons in specific regions of the brain. Although the cascade of antecedent events leading to the final common path of neurodegeneration must be determined in greater detail, the accumulation of stable amyloid is increasingly widely accepted as a central pathogenetic event. All mutations known to cause AD increase the production of beta-amyloid peptide. This protein is derived from amyloid precursor protein and, when aggregated in a beta-pleated sheet configuration, is neurotoxic and forms the core of neuritic plaques. Nerve cell loss in selected nuclei leads to neurochemical deficiencies, and the combination of neuronal loss and neurotransmitter deficits leads to the appearance of the dementia syndrome. The destructive aspects include neurochemical deficits that disrupt cell-to-cell communications, abnormal synthesis and accumulation of cytoskeletal proteins (e.g., tau), loss of synapses, pruning of dendrites, damage through oxidative metabolism, and cell death. The concepts of cognitive reserve and symptom thresholds may explain the effects of education, intelligence, and brain size on the occurrence and timing of AD symptoms. Advances in understanding the pathogenetic cascade of events that characterize AD provide a framework for early detection and therapeutic interventions, including transmitter replacement therapies, antioxidants, anti-inflammatory agents, estrogens, nerve growth factor, and drugs that prevent amyloid formation in the brain.",
"title": ""
}
] | [
{
"docid": "e7a260bfb238d8b4f147ac9c2a029d1d",
"text": "The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-pro t purposes provided that: • a full bibliographic reference is made to the original source • a link is made to the metadata record in DRO • the full-text is not changed in any way The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.",
"title": ""
},
{
"docid": "81476f837dd763301ba065ac78c5bb65",
"text": "Background: The ideal lip augmentation technique provides the longest period of efficacy, lowest complication rate, and best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with metaregression will focus on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. Methods: A systematic search for all English, French, Spanish, German, Italian, Portuguese and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, Sweetswise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Willey Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. Results: The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level of evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. Conclusions: This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to more conclusively determine the most appropriate approach for this procedure. Level of evidence: IV. © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fb02f47ab50ebe817175f21f7192ae6b",
"text": "Generative Adversarial Network (GAN) is a prominent generative model that are widely used in various applications. Recent studies have indicated that it is possible to obtain fake face images with a high visual quality based on this novel model. If those fake faces are abused in image tampering, it would cause some potential moral, ethical and legal problems. In this paper, therefore, we first propose a Convolutional Neural Network (CNN) based method to identify fake face images generated by the current best method [20], and provide experimental evidences to show that the proposed method can achieve satisfactory results with an average accuracy over 99.4%. In addition, we provide comparative results evaluated on some variants of the proposed CNN architecture, including the high pass filter, the number of the layer groups and the activation function, to further verify the rationality of our method.",
"title": ""
},
{
"docid": "8d292592202c948c439f055ca5df9d56",
"text": "This paper provides an overview of the current state of the art in persuasive systems design. All peer-reviewed full papers published at the first three International Conferences on Persuasive Technology were analyzed employing a literature review framework. Results from this analysis are discussed and directions for future research are suggested. Most research papers so far have been experimental. Five out of six of these papers (84.4%) have addressed behavioral change rather than an attitude change. Tailoring, tunneling, reduction and social comparison have been the most studied methods for persuasion. Quite, surprisingly ethical considerations have remained largely unaddressed in these papers. In general, many of the research papers seem to describe the investigated persuasive systems in a relatively vague manner leaving room for some improvement.",
"title": ""
},
{
"docid": "c21280fa617bcf55991702211f1fde8b",
"text": "How useful can machine learning be in a quantum laboratory? Here we raise the question of the potential of intelligent machines in the context of scientific research. A major motivation for the present work is the unknown reachability of various entanglement classes in quantum experiments. We investigate this question by using the projective simulation model, a physics-oriented approach to artificial intelligence. In our approach, the projective simulation system is challenged to design complex photonic quantum experiments that produce high-dimensional entangled multiphoton states, which are of high interest in modern quantum experiments. The artificial intelligence system learns to create a variety of entangled states and improves the efficiency of their realization. In the process, the system autonomously (re)discovers experimental techniques which are only now becoming standard in modern quantum optical experiments-a trait which was not explicitly demanded from the system but emerged through the process of learning. Such features highlight the possibility that machines could have a significantly more creative role in future research.",
"title": ""
},
{
"docid": "df6d4e6d74d96b7ab1951cc869caad59",
"text": "A broadband commonly fed antenna with dual polarization is proposed in this letter. The main radiator of the antenna is designed as a loop formed by four staircase-like branches. In this structure, the 0° polarization and 90° polarization share the same radiator and reflector. Measurement shows that the proposed antenna obtains a broad impedance bandwidth of 70% (1.5–3.1 GHz) with <inline-formula><tex-math notation=\"LaTeX\">$\\vert {{S}}_{11}\\vert < -{\\text{10 dB}}$</tex-math></inline-formula> and a high port-to-port isolation of 35 dB. The antenna gain within the operating frequency band is between 7.2 and 9.5 dBi, which indicates a stable broadband radiation performance. Moreover, a high cross-polarization discrimination of 25 dB is achieved across the whole operating frequency band.",
"title": ""
},
{
"docid": "5c112eb4be8321d79b63790e84de278f",
"text": "Service-dominant logic continues its evolution, facilitated by an active community of scholars throughout the world. Along its evolutionary path, there has been increased recognition of the need for a crisper and more precise delineation of the foundational premises and specification of the axioms of S-D logic. It also has become apparent that a limitation of the current foundational premises/axioms is the absence of a clearly articulated specification of the mechanisms of (often massive-scale) coordination and cooperation involved in the cocreation of value through markets and, more broadly, in society. This is especially important because markets are even more about cooperation than about the competition that is more frequently discussed. To alleviate this limitation and facilitate a better understanding of cooperation (and coordination), an eleventh foundational premise (fifth axiom) is introduced, focusing on the role of institutions and institutional arrangements in systems of value cocreation: service ecosystems. Literature on institutions across multiple social disciplines, including marketing, is briefly reviewed and offered as further support for this fifth axiom.",
"title": ""
},
{
"docid": "080faec9dff683610f2e98609d53d044",
"text": "We present a system which is able to reconstruct human faces on mobile devices with only on-device processing using the sensors which are typically built into a current commodity smart phone. Such technology can for example be used for facial authentication purposes or as a fast preview for further post-processing. Our method uses recently proposed techniques which compute depth maps by passive multi-view stereo directly on the device. We propose an efficient method which recovers the geometry of the face from the typically noisy point cloud. First, we show that we can safely restrict the reconstruction to a 2.5D height map representation. Therefore we then propose a novel low dimensional height map shape model for faces which can be fitted to the input data efficiently even on a mobile phone. In order to be able to represent instance specific shape details, such as moles, we augment the reconstruction from the shape model with a distance map which can be regularized efficiently. We thoroughly evaluate our approach on synthetic and real data, thereby we use both high resolution depth data acquired using high quality multi-view stereo and depth data directly computed on mobile phones.",
"title": ""
},
{
"docid": "c694936a9b8f13654d06b72c077ed8f4",
"text": "Druid is an open source data store designed for real-time exploratory analytics on large data sets. The system combines a column-oriented storage layout, a distributed, shared-nothing architecture, and an advanced indexing structure to allow for the arbitrary exploration of billion-row tables with sub-second latencies. In this paper, we describe Druid’s architecture, and detail how it supports fast aggregations, flexible filters, and low latency data ingestion.",
"title": ""
},
{
"docid": "6e7098f39a8b860307dba52dcc7e0d42",
"text": "The paper presents an experimental algorithm to detect conventionalized metaphors implicit in the lexical data in a resource like WordNet, where metaphors are coded into the senses and so would never be detected by any algorithm based on the violation of preferences, since there would always be a constraint satisfied by such senses. We report an implementation of this algorithm, which was implemented first the preference constraints in VerbNet. We then derived in a systematic way a far more extensive set of constraints based on WordNet glosses, and with this data we reimplemented the detection algorithm and got a substantial improvement in recall. We suggest that this technique could contribute to improve the performance of existing metaphor detection strategies that do not attempt to detect conventionalized metaphors. The new WordNet-derived data is of wider significance because it also contains adjective constraints, unlike any existing lexical resource, and can be applied to any language with a semantic parser (and",
"title": ""
},
{
"docid": "fcea869f6aafdc0d341c87073422256f",
"text": "Table A1 summarizes the various characteristics of the synthetic models used in the experiments, including the number of event types, the size of the state space, whether a challenging construct is contained (loops, duplicates, nonlocal choice, and concurrency), and the entropy of the process defined by the model (estimated based on a sample of size 10,000). The original models may contain either duplicate tasks (two conceptually different transitions with the same label) or invisible tasks (transitions that have no label, as their firing is not recorded in the event log). We transformed all invisible transitions to duplicates such that, when there was an invisible task i in the original model, we added duplicates for all transitions t that, when fired, enable the invisible transition. These duplicates emulate the combined firing of t and i. Since we do not distinguish between duplicates and invisible tasks, we combined this category.",
"title": ""
},
{
"docid": "87748bcc07ab498218233645bdd4dd0c",
"text": "This paper proposes a method of recognizing and classifying the basic activities such as forward and backward motions by applying a deep learning framework on passive radio frequency (RF) signals. The echoes from the moving body possess unique pattern which can be used to recognize and classify the activity. A passive RF sensing test- bed is set up with two channels where the first one is the reference channel providing the un- altered echoes of the transmitter signals and the other one is the surveillance channel providing the echoes of the transmitter signals reflecting from the moving body in the area of interest. The echoes of the transmitter signals are eliminated from the surveillance signals by performing adaptive filtering. The resultant time series signal is classified into different motions as predicted by proposed novel method of convolutional neural network (CNN). Extensive amount of training data has been collected to train the model, which serves as a reference benchmark for the later studies in this field.",
"title": ""
},
{
"docid": "ebea79abc60a5d55d0397d21f54cc85e",
"text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.",
"title": ""
},
{
"docid": "40ca946c3cd4c8617585c648de5ce883",
"text": "Investigating the incidence, type, and preventability of adverse drug events (ADEs) and medication errors is crucial to improving the quality of health care delivery. ADEs, potential ADEs, and medication errors can be collected by extraction from practice data, solicitation of incidents from health professionals, and patient surveys. Practice data include charts, laboratory, prescription data, and administrative databases, and can be reviewed manually or screened by computer systems to identify signals. Research nurses, pharmacists, or research assistants review these signals, and those that are likely to represent an ADE or medication error are presented to reviewers who independently categorize them into ADEs, potential ADEs, medication errors, or exclusions. These incidents are also classified according to preventability, ameliorability, disability, severity, stage, and responsible person. These classifications, as well as the initial selection of incidents, have been evaluated for agreement between reviewers and the level of agreement found ranged from satisfactory to excellent (kappa = 0.32-0.98). The method of ADE and medication error detection and classification described is feasible and has good reliability. It can be used in various clinical settings to measure and improve medication safety.",
"title": ""
},
{
"docid": "2d5464b91c5e8338c9bc697d89135b49",
"text": "A new phototrophic sulfur bacterium has been isolated from a red layer in a laminated mat occurring underneath a gypsum crust in the mediterranean salterns of Salin-de-Giraud (Camargue, France). Single cells were coccus-shaped, non motile, without gas vacuoles and contained sulfur globules. Bacteriochlorophyll a and okenone were present as major photosynthetic pigments. These properties and the G+C content of DNA (65.9–66.6 mol% G+C) are typical characteristics of the genus Thiocapsa. However, the new isolate differs from known species in the genus, particularly in NaCl requirement (optimum, 7% NaCl; range, 3–20% NaCl) and some physiological characteristics. Therefore, a new species is proposed, Thiocapsa halophila, sp. nov.",
"title": ""
},
{
"docid": "477be87ed75b8245de5e084a366b7a6d",
"text": "This paper addresses the problem of using unmanned aerial vehicles for the transportation of suspended loads. The proposed solution introduces a novel control law capable of steering the aerial robot to a desired reference while simultaneously limiting the sway of the payload. The stability of the equilibrium is proven rigorously through the application of the nested saturation formalism. Numerical simulations demonstrating the effectiveness of the controller are provided.",
"title": ""
},
{
"docid": "790c3da6f3f1d5aa1cc478db5a4ac0b8",
"text": "The present article investigated the performance and corrosion behavior between Ag alloy wire bond and Al pad under molding compounds of different chlorine contents. The epoxy molding compounds (EMCs) were categorized as ultra-high chlorine, high chlorine and low chlorine, respectively, with 18.3 and 4.1 ppm chlorine contents. The ball bonds were stressed under 130°C/85%RH with biased voltage of 10V. The interfacial evolution between Ag alloy wire bond and Al pad was investigated in EMC of three chlorine contents after the biased-HAST test. The Ag bonding wires used in the plastic ball grid array (PBGA) package include low Ag wire (89wt%) and high Ag alloy wire (97wt%). The as bonded wire bond exhibits an average Ag-Al IMC thickness of ~0.56 μm in both types of Ag alloy wire. Two Cu-Al IMC layers, AgAl2 and Ag4Al, analyzed by EDX were formed after 96h of biased-HAST test. The joint failed in 96h and 480h, respectively, under high chlorine content EMC. The joint lasts longer than 1056h with low chlorine content EMC. The corrosion of IMC formed between Ag alloy wire and Al pad, occurs in the high Ag content alloy wire. The results of EDX analysis indicate that the chlorine ion diffuses from molding compound to IMC through the crack formed between IMC and Al pad. Al2O3 was formed within the IMC layer. It is believed the existence of Al2O3 accelerates the penetration of the chlorine ion and thus the corrosion.",
"title": ""
},
{
"docid": "07ff0274408e9ba5d6cd2b1a2cb7cbf8",
"text": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).",
"title": ""
},
{
"docid": "af752d0de962449acd9a22608bd7baba",
"text": "Ð R is a real time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. R employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. R can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. R can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320Â240 resolution images on a 400 Mhz dual-Pentium II PC.",
"title": ""
}
] | scidocsrr |
69b81381861b35978d1138fab5be99ea | An interactive graph cut method for brain tumor segmentation | [
{
"docid": "85a076e58f4d117a37dfe6b3d68f5933",
"text": "We propose a new model for active contours to detect objects in a given image, based on techniques of curve evolution, Mumford-Shah (1989) functional for segmentation and level sets. Our model can detect objects whose boundaries are not necessarily defined by the gradient. We minimize an energy which can be seen as a particular case of the minimal partition problem. In the level set formulation, the problem becomes a \"mean-curvature flow\"-like evolving the active contour, which will stop on the desired boundary. However, the stopping term does not depend on the gradient of the image, as in the classical active contour models, but is instead related to a particular segmentation of the image. We give a numerical algorithm using finite differences. Finally, we present various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. Also, the initial curve can be anywhere in the image, and interior contours are automatically detected.",
"title": ""
}
] | [
{
"docid": "a73b9ce3d0808177c9f0739b67a1a3f3",
"text": "Multiword expressions (MWEs) are lexical items that can be decomposed into multiple component words, but have properties that are unpredictable with respect to their component words. In this paper we propose the first deep learning models for token-level identification of MWEs. Specifically, we consider a layered feedforward network, a recurrent neural network, and convolutional neural networks. In experimental results we show that convolutional neural networks are able to outperform the previous state-of-the-art for MWE identification, with a convolutional neural network with three hidden layers giving the best performance.",
"title": ""
},
{
"docid": "308e06ce00b1dfaf731b1a91e7c56836",
"text": "OBJECTIVE\nTo systematically review the literature regarding how statistical process control--with control charts as a core tool--has been applied to healthcare quality improvement, and to examine the benefits, limitations, barriers and facilitating factors related to such application.\n\n\nDATA SOURCES\nOriginal articles found in relevant databases, including Web of Science and Medline, covering the period 1966 to June 2004.\n\n\nSTUDY SELECTION\nFrom 311 articles, 57 empirical studies, published between 1990 and 2004, met the inclusion criteria.\n\n\nMETHODS\nA standardised data abstraction form was used for extracting data relevant to the review questions, and the data were analysed thematically.\n\n\nRESULTS\nStatistical process control was applied in a wide range of settings and specialties, at diverse levels of organisation and directly by patients, using 97 different variables. The review revealed 12 categories of benefits, 6 categories of limitations, 10 categories of barriers, and 23 factors that facilitate its application and all are fully referenced in this report. Statistical process control helped different actors manage change and improve healthcare processes. It also enabled patients with, for example asthma or diabetes mellitus, to manage their own health, and thus has therapeutic qualities. Its power hinges on correct and smart application, which is not necessarily a trivial task. This review catalogs 11 approaches to such smart application, including risk adjustment and data stratification.\n\n\nCONCLUSION\nStatistical process control is a versatile tool which can help diverse stakeholders to manage change in healthcare and improve patients' health.",
"title": ""
},
{
"docid": "e2c4f9cfce1db6282fe3a23fd5d6f3a4",
"text": "In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases.",
"title": ""
},
{
"docid": "186c928bf9f3639294bddc1b85c8c358",
"text": "Domain adaptation methods aim to learn a good prediction model in a label-scarce target domain by leveraging labeled patterns from a related source domain where there is a large amount of labeled data. However, in many practical domain adaptation learning scenarios, the feature distribution in the source domain is different from that in the target domain. In the extreme, the two distributions could differ completely when the feature representation of the source domain is totally different from that of the target domain. To address the problems of substantial feature distribution divergence across domains and heterogeneous feature representations of different domains, we propose a novel feature space independent semi-supervised kernel matching method for domain adaptation in this work. Our approach learns a prediction function on the labeled source data while mapping the target data points to similar source data points by matching the target kernel matrix to a submatrix of the source kernel matrix based on a Hilbert Schmidt Independence Criterion. We formulate this simultaneous learning and mapping process as a non-convex integer optimization problem and present a local minimization procedure for its relaxed continuous form. We evaluate the proposed kernel matching method using both cross domain sentiment classification tasks of Amazon product reviews and cross language text classification tasks of Reuters multilingual newswire stories. Our empirical results demonstrate that the proposed kernel matching method consistently and significantly outperforms comparison methods on both cross domain classification problems with homogeneous feature spaces and cross domain classification problems with heterogeneous feature spaces.",
"title": ""
},
{
"docid": "129a42c825850acd12b2f90a0c65f4ea",
"text": "Vertical fractures in teeth can present difficulties in diagnosis. There are, however, many specific clinical and radiographical signs which, when present, can alert clinicians to the existence of a fracture. In this review, the diagnosis of vertical root fractures is discussed in detail, and examples are presented of clinical and radiographic signs associated with these fractured teeth. Treatment alternatives are discussed for both posterior and anterior teeth.",
"title": ""
},
{
"docid": "209d202fd4b0e2376894345e3806bb70",
"text": "Support vector data description (SVDD) is a useful method for outlier detection and has been applied to a variety of applications. However, in the existing optimization procedure of SVDD, there are some issues which may lead to improper usage of SVDD. Some of the issues might already be known in practice, but the theoretical discussion, justification and correction are still lacking. Given the wide use of SVDD, these issues inspire us to carefully study SVDD in the view of convex optimization. In particular, we derive the dual problem with strong duality, prove theorems to handle theoretical insufficiency in the literature of SVDD, investigate some novel extensions of SVDD, and come up with an implementation of training SVDD with theoretical guarantee.",
"title": ""
},
{
"docid": "1f9d552c63f35b696d8fa0bc7d0cfc64",
"text": "Using Argumentation to Control Lexical Choice: A Functional Unification Implementation",
"title": ""
},
{
"docid": "2ae3a8bf304cfce89e8fcd331d1ec733",
"text": "Linear Discriminant Analysis (LDA) is among the most optimal dimension reduction methods for classification, which provides a high degree of class separability for numerous applications from science and engineering. However, problems arise with this classical method when one or both of the scatter matrices is singular. Singular scatter matrices are not unusual in many applications, especially for highdimensional data. For high-dimensional undersampled and oversampled problems, the classical LDA requires modification in order to solve a wider range of problems. In recent work the generalized singular value decomposition (GSVD) has been shown to mitigate the issue of singular scatter matrices, and a new algorithm, LDA/GSVD, has been shown to be very robust for many applications in machine learning. However, the GSVD inherently has a considerable computational overhead. In this paper, we propose fast algorithms based on the QR decomposition and regularization that solve the LDA/GSVD computational bottleneck. In addition, we present fast algorithms for classical LDA and regularized LDA utilizing the framework based on LDA/GSVD and preprocessing by the Cholesky decomposition. Experimental results are presented that demonstrate substantial speedup in all of classical LDA, regularized LDA, and LDA/GSVD algorithms without any sacrifice in classification performance for a wide range of machine learning applications.",
"title": ""
},
{
"docid": "71dedfe6f0df1ab1c8f44f28791db66c",
"text": "Summarizing a document requires identifying the important parts of the document with an objective of providing a quick overview to a reader. However, a long article can span several topics and a single summary cannot do justice to all the topics. Further, the interests of readers can vary and the notion of importance can change across them. Existing summarization algorithms generate a single summary and are not capable of generating multiple summaries tuned to the interests of the readers. In this paper, we propose an attention based RNN framework to generate multiple summaries of a single document tuned to different topics of interest. Our method outperforms existing baselines and our results suggest that the attention of generative networks can be successfully biased to look at sentences relevant to a topic and effectively used to generate topic-tuned summaries.",
"title": ""
},
{
"docid": "cbeaacd304c0fcb1bce3decfb8e76e33",
"text": "One of the main problems with virtual reality as a learning tool is that there are hardly any theories or models upon which to found and justify the application development. This paper presents a model that defends the metaphorical design of educational virtual reality systems. The goal is to build virtual worlds capable of embodying the knowledge to be taught: the metaphorical structuring of abstract concepts looks for bodily forms of expression in order to make knowledge accessible to students. The description of a case study aimed at learning scientific categorization serves to explain and implement the process of metaphorical projection. Our proposals are based on Lakoff and Johnson's theory of cognition, which defends the conception of the embodied mind, according to which most of our knowledge relies on basic metaphors derived from our bodily experience.",
"title": ""
},
{
"docid": "40f8240220dad82a7a2da33932fb0e73",
"text": "The incidence of clinically evident Curling's ulcer among 109 potentially salvageable severely burned patients was reviewed. These patients, who had greater than a 40 per cent body surface area burn, received one of these three treatment regimens: antacids hourly until autografting was complete, antacids hourly during the early postburn period followed by nutritional supplementation with Vivonex until autografting was complete or no antacids during the early postburn period but subsequent nutritional supplementation with Vivonex until autografting was complete. Clinically evident Curling's ulcer occurred in three patients. This incidence approximates the lowest reported among severely burned patients treated prophylactically with acid-reducing regimens to minimize clinically evident Curling's ulcer. In addition to its protective effect on Curling's ulcer, Vivonex, when used in combination with a high protein, high caloric diet, meets the caloric needs of the severely burned patient. Probably, Vivonex, which has a pH range of 4.5 to 5.4 protects against clinically evident Curling's ulcer by a dilutional alkalinization of gastric secretion.",
"title": ""
},
{
"docid": "7350c0433fe1330803403e6aa03a2f26",
"text": "An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.",
"title": ""
},
{
"docid": "1c7457ef393a604447b0478451ef0c62",
"text": "Melasma is an acquired increased pigmentation of the skin [1], a symmetric hypermelanosis, characterized by irregular light to gray brown macules. Melasma comes from the Greek word melas [= black color), formerly known as Chloasma, another Greek word meaning green color, even though the term was more often used for melasma cases during pregnancy. It is considered to be part of a large group of facial melanosis, such as Riehl’s melanosis, Lichen planuspigmentous, erythema dyschromicumperstans, erythrosis and poikiloderma of Civatte [2]. Hyperpigmented macules and patches are most commonly developed in the sun-exposed areas of the skin [3]. Melasma is considered to be a chronic acquired hypermelanosis of the skin [4], with poorly understood pathogenesis [5]. The increased pigmentation and the photo damaged features that characterize melasma include solar elastosis, even though the main pathogenesis still remains unknown [6].",
"title": ""
},
{
"docid": "65a143978c3b1980f512cfb22f176568",
"text": "Recognizing textual entailment (RTE) has been proposed as a task in computational linguistics under a successful series of annual evaluation campaigns started in 2005 with the Pascal RTE-1 shared task. RTE is defined as the capability of a system to recognize that the meaning of a portion of text (usually one or few sentences) entails the meaning of another portion of text. Subsequently, the task has also been extended to recognizing specific cases of non-entailment, as when the meaning of the first text contradicts the meaning of the second text. Although the study of entailment phenomena in natural language was addressed much earlier, the novelty of the RTE evaluation was to propose a simple text-to-text task to compare human and system judgments, making it possible to build data sets and to experiment with a variety of approaches. Two main reasons likely contributed to the success of the initiative: First, the possibility to address, for the first time, the complexity of entailment phenomena under a data-driven perspective; second, the text-to-text approach allows one to easily incorporate a textual entailment engine into applications (e.g., question answering, summarization, information extraction) as a core inferential component.",
"title": ""
},
{
"docid": "97968acf486f3f4bcdbccdfcd116dabb",
"text": "Disruption of electric power operations can be catastrophic on national security and the economy. Due to the complexity of widely dispersed assets and the interdependences among computer, communication, and power infrastructures, the requirement to meet security and quality compliance on operations is a challenging issue. In recent years, the North American Electric Reliability Corporation (NERC) established a cybersecurity standard that requires utilities' compliance on cybersecurity of control systems. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). In this paper, a comprehensive survey on cybersecurity of critical infrastructures is reported. A supervisory control and data acquisition security framework with the following four major components is proposed: (1) real-time monitoring; (2) anomaly detection; (3) impact analysis; and (4) mitigation strategies. In addition, an attack-tree-based methodology for impact analysis is developed. The attack-tree formulation based on power system control networks is used to evaluate system-, scenario -, and leaf-level vulnerabilities by identifying the system's adversary objectives. The leaf vulnerability is fundamental to the methodology that involves port auditing or password strength evaluation. The measure of vulnerabilities in the power system control framework is determined based on existing cybersecurity conditions, and then, the vulnerability indices are evaluated.",
"title": ""
},
{
"docid": "e6e6eb1f1c0613a291c62064144ff0ba",
"text": "Mobile phones have become the most popular way to communicate with other individuals. While cell phones have become less of a status symbol and more of a fashion statement, they have created an unspoken social dependency. Adolescents and young adults are more likely to engage in SMS messing, making phone calls, accessing the internet from their phone or playing a mobile driven game. Once pervaded by boredom, teenagers resort to instant connection, to someone, somewhere. Sensation seeking behavior has also linked adolescents and young adults to have the desire to take risks with relationships, rules and roles. Individuals seek out entertainment and avoid boredom at all times be it appropriate or inappropriate. Cell phones are used for entertainment, information and social connectivity. It has been demonstrated that individuals with low self – esteem use cell phones to form and maintain social relationships. They form an attachment with cell phone which molded their mind that they cannot function without their cell phone on a day-to-day basis. In this context, the study attempts to examine the extent of use of mobile phone and its influence on the academic performance of the students. A face to face survey using structured questionnaire was the method used to elicit the opinions of students between the age group of 18-25 years in three cities covering all the three regions the State of Andhra Pradesh in India. The survey was administered among 1200 young adults through two stage random sampling to select the colleges and respondents from the selected colleges, with 400 from each city. In Hyderabad, 201 males and 199 females participated in the survey. In Visakhapatnam, 192 males and 208 females participated. In Tirupati, 220 males and 180 females completed the survey. Two criteria were taken into consideration while choosing the participants for the survey. The participants are college-going and were mobile phone users. Each of the survey responses was entered and analyzed using SPSS software. The Statistical Package for Social Sciences (SPSS 16) had been used to work out the distribution of samples in terms of percentages for each specified parameter.",
"title": ""
},
{
"docid": "00f9290840ba201e23d0ea6149f344e4",
"text": "Despite the plethora of security advice and online education materials offered to end-users, there exists no standard measurement tool for end-user security behaviors. We present the creation of such a tool. We surveyed the most common computer security advice that experts offer to end-users in order to construct a set of Likert scale questions to probe the extent to which respondents claim to follow this advice. Using these questions, we iteratively surveyed a pool of 3,619 computer users to refine our question set such that each question was applicable to a large percentage of the population, exhibited adequate variance between respondents, and had high reliability (i.e., desirable psychometric properties). After performing both exploratory and confirmatory factor analysis, we identified a 16-item scale consisting of four sub-scales that measures attitudes towards choosing passwords, device securement, staying up-to-date, and proactive awareness.",
"title": ""
},
{
"docid": "25b16e9fa168a58ea813110ea46c6ce8",
"text": "In many graph–mining problems, two networks from different domains have to be matched. In the absence of reliable node attributes, graph matching has to rely on only the link structures of the two networks, which amounts to a generalization of the classic graph isomorphism problem. Graph matching has applications in social–network reconciliation and de-anonymization, protein–network alignment in biology, and computer vision. The most scalable graph–matching approaches use ideas from percolation theory, where a matched node pair “infects” neighbouring pairs as additional potential matches. This class of matching algorithm requires an initial seed set of known matches to start the percolation. The size and correctness of the matching is very sensitive to the size of the seed set. In this paper, we give a new graph–matching algorithm that can operate with a much smaller seed set than previous approaches, with only a small increase in matching errors. We characterize a phase transition in matching performance as a function of the seed set size, using a random bigraph model and ideas from bootstrap percolation theory. We also show the excellent performance in matching several real large-scale social networks, using only a handful of seeds.",
"title": ""
},
{
"docid": "4b546f3bc34237d31c862576ecf63f9a",
"text": "Optimizing the internal supply chain for direct or production goods was a major element during the implementation of enterprise resource planning systems (ERP) which has taken place since the late 1980s. However, supply chains to the suppliers of indirect materials were not usually included due to low transaction volumes, low product values and low strategic importance of these goods. With the advent of the Internet, systems for streamlining indirect goods supply chains emerged and were adopted by many companies. In view of the paperprone processes in many companies, the implementation of these electronic procurement systems led to substantial improvement potentials. This research reports the quantitative and qualitative results of a benchmarking study which explores the use of the Internet in procurement (eProcurement). Among the major goals are to obtain more insight on how European and North American companies used and introduced eProcurement solutions as well as how these systems enhanced the procurement function. The analysis presents a heterogeneous picture and shows that all analyzed solutions emphasize different parts of the procurement and coordination process. Based on interviews and case studies the research proposes an initial set of generalized success factors which may improve future implementations and stimulate further success factor research.",
"title": ""
}
] | scidocsrr |
30299681fe7d92626a84c1b1a6b7deac | Deep learning for tactile understanding from visual and haptic data | [
{
"docid": "14658e1be562a01c1ba8338f5e87020b",
"text": "This paper discusses a novel approach in developing a texture sensor emulating the major features of a human finger. The aim of this study is to realize precise and quantitative texture sensing. Three physical properties, roughness, softness, and friction are known to constitute texture perception of humans. The sensor is designed to measure the three specific types of information by adopting the mechanism of human texture perception. First, four features of the human finger that were focused on in designing the novel sensor are introduced. Each feature is considered to play an important role in texture perception; the existence of nails and bone, the multiple layered structure of soft tissue, the distribution of mechanoreceptors, and the deployment of epidermal ridges. Next, detailed design of the texture sensor based on the design concept is explained, followed by evaluating experiments and analysis of the results. Finally, we conducted texture perceptive experiments of actual material using the developed sensor, thus achieving the information expected. Results show the potential of our approach.",
"title": ""
}
] | [
{
"docid": "2f110c5f312ceefdf6c1ea1fd78a361f",
"text": "Enrollments in introductory computer science courses are growing rapidly, thereby taxing scarce teaching resources and motivating the increased use of automated tools for program grading. Such tools commonly rely on regression testing methods from industry. However, the goals of automated grading differ from those of testing for software production. In academia, a primary motivation for testing is to provide timely and accurate feedback to students so that they can understand and fix defects in their programs. Testing strategies for program grading are therefore distinct from those of traditional software testing. This paper enumerates and describes a number of testing strategies that improve the quality of feedback for different types of programming assignments.",
"title": ""
},
{
"docid": "5b3ca1cc607d2e8f0394371f30d9e83a",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "bb8115f8c172e22bd0ff70bd079dfa98",
"text": "This paper reports on the second generation of the Pleated Pneumatic Artificial Muscle (PPAM) which has been developed to extend the life span of its first prototype. This type of artificial was developed to overcome dry friction and material deformation which is present in the widely used McKibben type of artificial muscle. The essence of the PPAM is its pleated membrane structure which enables the muscle to work at low pressures and at large contractions. There is a growing interest in this kind of actuation for robotics applications due to its high power to weight ratio and the adaptable compliance, especially for legged locomotion and robot applications in direct contact with a human. This paper describes the design of the second generation PPAM, for which specifically the membrane layout has been changed. In function of this new layout the mathematical model, developed for the first prototype, has been reformulated. This paper gives an elaborate discussion on this mathematical model which represents the force generation and enclosed muscle volume. Static load tests on some real muscles, which have been carried out in order to validate the mathematical model, are then discussed. Furthermore are given two robotic applications which currently use these pneumatic artificial muscles. One is the biped Lucy and the another one is a manipulator application which works in direct contact with an operator.",
"title": ""
},
{
"docid": "2d22631dcbbae408e0856b414c2f7d8e",
"text": "During the past few years, interest in convolutional neural networks (CNNs) has risen constantly, thanks to their excellent performance on a wide range of recognition and classification tasks. However, they suffer from the high level of complexity imposed by the high-dimensional convolutions in convolutional layers. Within scenarios with limited hardware resources and tight power and latency constraints, the high computational complexity of CNNs makes them difficult to be exploited. Hardware solutions have striven to reduce the power consumption using low-power techniques, and to limit the processing time by increasing the number of processing elements (PEs). While most of ASIC designs claim a peak performance of a few hundred giga operations per seconds, their average performance is substantially lower when applied to state-of-the-art CNNs such as AlexNet, VGGNet and ResNet, leading to low resource utilization. Their performance efficiency is limited to less than 55% on average, which leads to unnecessarily high processing latency and silicon area. In this paper, we propose a dataflow which enables to perform both the fully-connected and convolutional computations for any filter/layer size using the same PEs. We then introduce a multi-mode inference engine (MMIE) based on the proposed dataflow. Finally, we show that the proposed MMIE achieves a performance efficiency of more than 84% when performing the computations of the three renown CNNs (i.e., AlexNet, VGGNet and ResNet), outperforming the best architecture in the state-of-the-art in terms of energy consumption, processing latency and silicon area.",
"title": ""
},
{
"docid": "22e3a0e31a70669f311fb51663a76f9c",
"text": "A communication infrastructure is an essential part to the success of the emerging smart grid. A scalable and pervasive communication infrastructure is crucial in both construction and operation of a smart grid. In this paper, we present the background and motivation of communication infrastructures in smart grid systems. We also summarize major requirements that smart grid communications must meet. From the experience of several industrial trials on smart grid with communication infrastructures, we expect that the traditional carbon fuel based power plants can cooperate with emerging distributed renewable energy such as wind, solar, etc, to reduce the carbon fuel consumption and consequent green house gas such as carbon dioxide emission. The consumers can minimize their expense on energy by adjusting their intelligent home appliance operations to avoid the peak hours and utilize the renewable energy instead. We further explore the challenges for a communication infrastructure as the part of a complex smart grid system. Since a smart grid system might have over millions of consumers and devices, the demand of its reliability and security is extremely critical. Through a communication infrastructure, a smart grid can improve power reliability and quality to eliminate electricity blackout. Security is a challenging issue since the on-going smart grid systems facing increasing vulnerabilities as more and more automation, remote monitoring/controlling and supervision entities are interconnected.",
"title": ""
},
{
"docid": "c6b656cdec127997a5baf7228e530b02",
"text": "There are many scholarly articles in the literature sources that refer to the employee performance evaluation topic. Many scholars, for example, describe relations between employees’ job satisfaction, or motivation, and their performance. Others deal with the performance evaluation of the whole organization where they include tangible and intangible metrics. However, only few of them provide with such an employee performance evaluation model that could be practically applied in the companies as a reference. The main purpose of this paper is to explain one such practical model in the form of a standard document procedure which can serve as an example to follow it in the companies of different types. The model incorporates employee performance and compensation policy and is based on the five questions that represent the guiding principles, as well. The practical employee performance evaluation model and standard procedure will be explained based on the information and experience from a middle-sized industrial organization located in the Slovak republic.",
"title": ""
},
{
"docid": "fba55845801b1d145ff45b47efce8155",
"text": "This paper presents a technique for substantially reducing the noise of a CMOS low noise amplifier implemented in the inductive source degeneration topology. The effects of the gate induced current noise on the noise performance are taken into account, and the total output noise is strongly reduced by inserting a capacitance of appropriate value in parallel with the amplifying MOS transistor of the LNA. As a result, very low noise figures become possible already at very low power consumption levels.",
"title": ""
},
{
"docid": "b200a40d95e184e486a937901c606e12",
"text": "0749-5978/$ see front matter 2008 Elsevier Inc. A doi:10.1016/j.obhdp.2008.06.003 * Corresponding author. E-mail address: sthau@london.edu (S. Thau). Based on uncertainty management theory [Lind, E. A., & Van den Bos, K., (2002). When fairness works: Toward a general theory of uncertainty management. In Staw, B. M., & Kramer, R. M. (Eds.), Research in organizational behavior (Vol. 24, pp. 181–223). Greenwich, CT: JAI Press.], two studies tested whether a management style depicting situational uncertainty moderates the relationship between abusive supervision and workplace deviance. Study 1, using survey data from 379 subordinates of various industries, found that the positive relationship between abusive supervision and organizational deviance was stronger when authoritarian management style was low (high situational uncertainty) rather than high (low situational uncertainty). No significant interaction effect was found on interpersonal deviance. Study 2, using survey data from 1477 subordinates of various industries, found that the positive relationship between abusive supervision and supervisor-directed and organizational deviance was stronger when employees’ perceptions of their organization’s management style reflected high rather than low situational uncertainty. 2008 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "34149311075a7f564abe632adbbed521",
"text": "This paper presents a high-gain broadband suspended plate antenna for indoor wireless access point applications. This antenna consists of two layers operating at two adjacent bands. The bottom plate is fed by a tapered down strip excited by a probe through an SMA RF connector. The top plate is shorted to a ground plane by a strip electromagnetically coupled with the feed-strip. The design is carried out using a commercial EM software package, and validated experimentally. The measured result shows that the antenna achieves a broad operational bandwidth of 66%, suitable for access points in WiFi (2.4-2.485 GHz) and WiMAX (2.3-2.7 GHz and the 3.4-3.6 GHz) systems (IEEE 802.11b/g and IEEE 802.16-2004/e). The measured antenna gain varies from 7.7-9.5 dBi across the frequency bands of interest. A parametric study of this antenna is also conducted.",
"title": ""
},
{
"docid": "698fb992c5ff7ecc8d2e153f6b385522",
"text": "We investigate bag-of-visual-words (BOVW) approaches to land-use classification in high-resolution overhead imagery. We consider a standard non-spatial representation in which the frequencies but not the locations of quantized image features are used to discriminate between classes analogous to how words are used for text document classification without regard to their order of occurrence. We also consider two spatial extensions, the established spatial pyramid match kernel which considers the absolute spatial arrangement of the image features, as well as a novel method which we term the spatial co-occurrence kernel that considers the relative arrangement. These extensions are motivated by the importance of spatial structure in geographic data.\n The methods are evaluated using a large ground truth image dataset of 21 land-use classes. In addition to comparisons with standard approaches, we perform extensive evaluation of different configurations such as the size of the visual dictionaries used to derive the BOVW representations and the scale at which the spatial relationships are considered.\n We show that even though BOVW approaches do not necessarily perform better than the best standard approaches overall, they represent a robust alternative that is more effective for certain land-use classes. We also show that extending the BOVW approach with our proposed spatial co-occurrence kernel consistently improves performance.",
"title": ""
},
{
"docid": "f5d58660137891111a009bc841950ad2",
"text": "Lateral brow ptosis is a common aging phenomenon, contributing to the lateral upper eyelid hooding, in addition to dermatochalasis. Lateral brow lift complements upper blepharoplasty in achieving a youthful periorbital appearance. In this study, the author reports his experience in utilizing a temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia. A retrospective analysis of all patients undergoing the proposed technique by one surgeon from 2009 to 2016 was conducted. Additional procedures were recorded. Preoperative and postoperative photographs at the longest follow-up visit were used for analysis. Operation was performed under local anesthesia. The surgical technique included a temporal (pretrichial) incision with subcutaneous dissection toward the lateral brow, with superolateral lift and closure. Total of 45 patients (44 females, 1 male; mean age: 58 years) underwent the temporal (pretrichial) subcutaneous lateral brow lift technique under local anesthesia in office setting. The procedure was unilateral in 4 cases. Additional procedures included upper blepharoplasty (38), ptosis surgery (16), and lower blepharoplasty (24). Average follow-up time was 1 year (range, 6 months to 5 years). All patients were satisfied with the eyebrow contour and scar appearance. One patient required additional brow lift on one side for asymmetry. There were no cases of frontal nerve paralysis. In conclusion, the temporal (pretrichial) subcutaneous approach is an effective, safe technique for lateral brow lift/contouring, which can be performed under local anesthesia. It is ideal for women. Additional advantages include ease of operation, cost, and shortening the hairline (if necessary).",
"title": ""
},
{
"docid": "a3be253034ffcf61a25ad265fda1d4ff",
"text": "With the development of automated logistics systems, flexible manufacture systems (FMS) and unmanned automated factories, the application of automated guided vehicle (AGV) gradually become more important to improve production efficiency and logistics automatism for enterprises. The development of the AGV systems play an important role in reducing labor cost, improving working conditions, unifying information flow and logistics. Path planning has been a key issue in AGV control system. In this paper, two key problems, shortest time path planning and collision in multi AGV have been solved. An improved A-Star (A*) algorithm is proposed, which introduces factors of turning, and edge removal based on the improved A* algorithm is adopted to solve k shortest path problem. Meanwhile, a dynamic path planning method based on A* algorithm which searches effectively the shortest-time path and avoids collision has been presented. Finally, simulation and experiment have been conducted to prove the feasibility of the algorithm.",
"title": ""
},
{
"docid": "9be50791156572e6e1a579952073d810",
"text": "A synthetic aperture radar (SAR) raw data simulator is an important tool for testing the system parameters and the imaging algorithms. In this paper, a scene raw data simulator based on an inverse ω-k algorithm for bistatic SAR of a translational invariant case is proposed. The differences between simulations of monostatic and bistatic SAR are also described. The algorithm proposed has high precision and can be used in long-baseline configuration and for single-pass interferometry. Implementation details are described, and plenty of simulation results are provided to validate the algorithm.",
"title": ""
},
{
"docid": "791294c45e63b104b289b52b58512877",
"text": "Open source software (OSS) development teams use electronic means, such as emails, instant messaging, or forums, to conduct open and public discussions. Researchers investigated mailing lists considering them as a hub for project communication. Prior work focused on specific aspects of emails, for example the handling of patches, traceability concerns, or social networks. This led to insights pertaining to the investigated aspects, but not to a comprehensive view of what developers communicate about. Our objective is to increase the understanding of development mailing lists communication. We quantitatively and qualitatively analyzed a sample of 506 email threads from the development mailing list of a major OSS project, Lucene. Our investigation reveals that implementation details are discussed only in about 35% of the threads, and that a range of other topics is discussed. Moreover, core developers participate in less than 75% of the threads. We observed that the development mailing list is not the main player in OSS project communication, as it also includes other channels such as the issue repository.",
"title": ""
},
{
"docid": "539c3b253a18f32064935217f6b0ea67",
"text": "Salient object detection is not a pure low-level, bottom-up process. Higher-level knowledge is important even for task-independent image saliency. We propose a unified model to incorporate traditional low-level features with higher-level guidance to detect salient objects. In our model, an image is represented as a low-rank matrix plus sparse noises in a certain feature space, where the non-salient regions (or background) can be explained by the low-rank matrix, and the salient regions are indicated by the sparse noises. To ensure the validity of this model, a linear transform for the feature space is introduced and needs to be learned. Given an image, its low-level saliency is then extracted by identifying those sparse noises when recovering the low-rank matrix. Furthermore, higher-level knowledge is fused to compose a prior map, and is treated as a prior term in the objective function to improve the performance. Extensive experiments show that our model can comfortably achieves comparable performance to the existing methods even without the help from high-level knowledge. The integration of top-down priors further improves the performance and achieves the state-of-the-art. Moreover, the proposed model can be considered as a prototype framework not only for general salient object detection, but also for potential task-dependent saliency applications.",
"title": ""
},
{
"docid": "24bb26da0ce658ff075fc89b73cad5af",
"text": "Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather lowdimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction.",
"title": ""
},
{
"docid": "8fd28fb7c30c3dc30d4a92f95d38c966",
"text": "In recent years, iris recognition is becoming a very active topic in both research and practical applications. However, fake iris is a potential threat there are potential threats for iris-based systems. This paper presents a novel fake iris detection method based on the analysis of 2-D Fourier spectra together with iris image quality assessment. First, image quality assessment method is used to exclude the defocused, motion blurred fake iris. Then statistical properties of Fourier spectra for fake iris are used for clear fake iris detection. Experimental results show that the proposed method can detect photo iris and printed iris effectively.",
"title": ""
},
{
"docid": "408f58b7dd6cb1e6be9060f112773888",
"text": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios.",
"title": ""
},
{
"docid": "011332e3d331d461e786fd2827b0434d",
"text": "In this manuscript we present various robust statistical methods popular in the social sciences, and show how to apply them in R using the WRS2 package available on CRAN. We elaborate on robust location measures, and present robust t-test and ANOVA versions for independent and dependent samples, including quantile ANOVA. Furthermore, we present on running interval smoothers as used in robust ANCOVA, strategies for comparing discrete distributions, robust correlation measures and tests, and robust mediator models.",
"title": ""
},
{
"docid": "7b552767a37a7d63591471195b2e002b",
"text": "Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin.",
"title": ""
}
] | scidocsrr |
4161d52a643d1366f0606add5d1cb4ea | Exhaustive search algorithms to mine subgroups on Big Data using Apache Spark | [
{
"docid": "55b405991dc250cd56be709d53166dca",
"text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.",
"title": ""
}
] | [
{
"docid": "172a35c941407bb09c8d41953dfc6d37",
"text": "Multi-task learning (MTL) is a machine learning paradigm that improves the performance of each task by exploiting useful information contained in multiple related tasks. However, the relatedness of tasks can be exploited by attackers to launch data poisoning attacks, which has been demonstrated a big threat to single-task learning. In this paper, we provide the first study on the vulnerability of MTL. Specifically, we focus on multi-task relationship learning (MTRL) models, a popular subclass of MTL models where task relationships are quantized and are learned directly from training data. We formulate the problem of computing optimal poisoning attacks on MTRL as a bilevel program that is adaptive to arbitrary choice of target tasks and attacking tasks. We propose an efficient algorithm called PATOM for computing optimal attack strategies. PATOM leverages the optimality conditions of the subproblem of MTRL to compute the implicit gradients of the upper level objective function. Experimental results on realworld datasets show that MTRL models are very sensitive to poisoning attacks and the attacker can significantly degrade the performance of target tasks, by either directly poisoning the target tasks or indirectly poisoning the related tasks exploiting the task relatedness. We also found that the tasks being attacked are always strongly correlated, which provides a clue for defending against such attacks.",
"title": ""
},
{
"docid": "164fd7be21190314a27bacb4dec522c5",
"text": "The relative ineffectiveness of information retrieval systems is largely caused by the inaccuracy with which a query formed by a few keywords models the actual user information need. One well known method to overcome this limitation is automatic query expansion (AQE), whereby the user’s original query is augmented by new features with a similar meaning. AQE has a long history in the information retrieval community but it is only in the last years that it has reached a level of scientific and experimental maturity, especially in laboratory settings such as TREC. This survey presents a unified view of a large number of recent approaches to AQE that leverage various data sources and employ very different principles and techniques. The following questions are addressed. Why is query expansion so important to improve search effectiveness? What are the main steps involved in the design and implementation of an AQE component? What approaches to AQE are available and how do they compare? Which issues must still be resolved before AQE becomes a standard component of large operational information retrieval systems (e.g., search engines)?",
"title": ""
},
{
"docid": "4b6a4f9d91bc76c541f4879a1a684a3f",
"text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based base-lines, in terms of mean reciprocal rank (MRR) by up to 9%.",
"title": ""
},
{
"docid": "2c8dc61a5dbdfcf8f086a5e6a0d920c1",
"text": "This work achieves a two-and-a-half-dimensional (2.5D) wafer-level radio frequency (RF) energy harvesting rectenna module with a compact size and high power conversion efficiency (PCE) that integrates a 2.45 GHz antenna in an integrated passive device (IPD) and a rectifier in a tsmcTM 0.18 μm CMOS process. The proposed rectifier provides a master-slave voltage doubling full-wave topology which can reach relatively high PCE by means of a relatively simple circuitry. The IPD antenna was stacked on top of the CMOS rectifier. The rectenna (including an antenna and rectifier) achieves an output voltage of 1.2 V and PCE of 47 % when the operation frequency is 2.45 GHz, with −12 dBm input power. The peak efficiency of the circuit is 83 % with −4 dBm input power. The die size of the RF harvesting module is less than 1 cm2. The performance of this module makes it possible to energy mobile device and it is also very suitable for wearable and implantable wireless sensor networks (WSN).",
"title": ""
},
{
"docid": "de24242bef4464a0126ce3806b795ac8",
"text": "Music must first be defined and distinguished from speech, and from animal and bird cries. We discuss the stages of hominid anatomy that permit music to be perceived and created, with the likelihood of both Homo neanderthalensis and Homo sapiens both being capable. The earlier hominid ability to emit sounds of variable pitch with some meaning shows that music at its simplest level must have predated speech. The possibilities of anthropoid motor impulse suggest that rhythm may have preceded melody, though full control of rhythm may well not have come any earlier than the perception of music above. There are four evident purposes for music: dance, ritual, entertainment personal, and communal, and above all social cohesion, again on both personal and communal levels. We then proceed to how instruments began, with a brief survey of the surviving examples from the Mousterian period onward, including the possible Neanderthal evidence and the extent to which they showed “artistic” potential in other fields. We warn that our performance on replicas of surviving instruments may bear little or no resemblance to that of the original players. We continue with how later instruments, strings, and skin-drums began and developed into instruments we know in worldwide cultures today. The sound of music is then discussed, scales and intervals, and the lack of any consistency of consonant tonality around the world. This is followed by iconographic evidence of the instruments of later antiquity into the European Middle Ages, and finally, the history of public performance, again from the possibilities of early humanity into more modern times. This paper draws the ethnomusicological perspective on the entire development of music, instruments, and performance, from the times of H. neanderthalensis and H. sapiens into those of modern musical history, and it is written with the deliberate intention of informing readers who are without special education in music, and providing necessary information for inquiries into the origin of music by cognitive scientists.",
"title": ""
},
{
"docid": "5c690df3977b078243b9cb61e5e712a6",
"text": "Computing indirect illumination is a challenging and complex problem for real-time rendering in 3D applications. We present a global illumination approach that computes indirect lighting in real time using a simplified version of the outgoing radiance and the scene stored in voxels. This approach comprehends two-bounce indirect lighting for diffuse, specular and emissive materials. Our voxel structure is based on a directional hierarchical structure stored in 3D textures with mipmapping, the structure is updated in real time utilizing the GPU which enables us to approximate indirect lighting for dynamic scenes. Our algorithm employs a voxel-light pass which calculates voxel direct and global illumination for the simplified outgoing radiance. We perform voxel cone tracing within this voxel structure to approximate different lighting phenomena such as ambient occlusion, soft shadows and indirect lighting. We demonstrate with different tests that our developed approach is capable to compute global illumination of complex scenes on interactive times.",
"title": ""
},
{
"docid": "2dd4a6736fcbd3bbb5b126f3ffcdda10",
"text": "Recent research leverages results from the continuous-armed bandit literature to create a reinforcement-learning algorithm for continuous state and action spaces. Initially proposed in a theoretical setting, we provide the first examination of the empirical properties of the algorithm. Through experimentation, we demonstrate the effectiveness of this planning method when coupled with exploration and model learning and show that, in addition to its formal guarantees, the approach is very competitive with other continuous-action reinforcement",
"title": ""
},
{
"docid": "38c5b9a1f696e060c4cda4cc19b6fa96",
"text": "This study aims to give information about the effect of green marketing on customers purchasing behaviors. First of all, environment and environmental problems, one of the reason why the green marketing emerged, are mentioned, and then the concepts of green marketing and green consumer are explained. Then together with the hypothesis developed literature review has been continued and studies conducted on this subject until now were mentioned. In the last section, moreover, questionnaire results conducted on 540 consumers in Istanbul are evaluated statistically. According to the results of the analysis, environmental awareness, green product features, green promotion activities and green price affect green purchasing behaviors of the consumers in positive way. Demographic characteristics have moderate affect on model.",
"title": ""
},
{
"docid": "ec7931f1a56bf7d4dd6cc1a5cb2d0625",
"text": "Modern life is intimately linked to the availability of fossil fuels, which continue to meet the world's growing energy needs even though their use drives climate change, exhausts finite reserves and contributes to global political strife. Biofuels made from renewable resources could be a more sustainable alternative, particularly if sourced from organisms, such as algae, that can be farmed without using valuable arable land. Strain development and process engineering are needed to make algal biofuels practical and economically viable.",
"title": ""
},
{
"docid": "f88b8c7cbabda618f59e75357c1d8262",
"text": "A security sandbox is a technology that is often used to detect advanced malware. However, current sandboxes are highly dependent on VM hypervisor types and versions. Thus, in this paper, we introduce a new sandbox design, using memory forensics techniques, to provide an agentless sandbox solution that is independent of the VM hypervisor. In particular, we leverage the VM introspection method to monitor malware running memory data outside the VM and analyze its system behaviors, such as process, file, registry, and network activities. We evaluate the feasibility of this method using 20 advanced and 8 script-based malware samples. We furthermore demonstrate how to analyze malware behavior from memory and verify the results with three different sandbox types. The results show that we can analyze suspicious malware activities, which is also helpful for cyber security defense.",
"title": ""
},
{
"docid": "7624a6ca581c0096c6e5bc484a3d772e",
"text": "We describe two systems for text simplification using typed dependency structures, one that performs lexical and syntactic simplification, and another that performs sentence compression optimised to satisfy global text constraints such as lexical density, the ratio of difficult words, and text length. We report a substantial evaluation that demonstrates the superiority of our systems, individually and in combination, over the state of the art, and also report a comprehension based evaluation of contemporary automatic text simplification systems with target non-native readers.",
"title": ""
},
{
"docid": "f4720df58360b726bf2a128547f6d9d1",
"text": "Iris texture is commonly thought to be highly discriminative between eyes and stable over individual lifetime, which makes iris particularly suitable for personal identification. However, iris texture also contains more information related to genes, which has been demonstrated by successful use of ethnic and gender classification based on iris. In this paper, we propose a novel ethnic classification method based on supervised codebook optimizing and Locality-constrained Linear Coding (LLC). The optimized codebook is composed of codes which are distinctive or mutual. Iris images from Asian and non-Asian are classified into two classes in experiments. Extensive experimental results show that the proposed method achieves encouraging classification rate and largely improves the ethnic classification performance comparing to existing algorithms.",
"title": ""
},
{
"docid": "55eec4fc4a211cee6b735d1884310cc0",
"text": "Understanding driving behaviors is essential for improving safety and mobility of our transportation systems. Data is usually collected via simulator-based studies or naturalistic driving studies. Those techniques allow for understanding relations between demographics, road conditions and safety. On the other hand, they are very costly and time consuming. Thanks to the ubiquity of smartphones, we have an opportunity to substantially complement more traditional data collection techniques with data extracted from phone sensors, such as GPS, accelerometer gyroscope and camera. We developed statistical models that provided insight into driver behavior in the San Francisco metro area based on tens of thousands of driver logs. We used novel data sources to support our work. We used cell phone sensor data drawn from five hundred drivers in San Francisco to understand the speed of traffic across the city as well as the maneuvers of drivers in different areas. Specifically, we clustered drivers based on their driving behavior. We looked at driver norms by street and flagged driving behaviors that deviated from the norm.",
"title": ""
},
{
"docid": "c9b7ddb6eb1431fcc508d29a1f25104b",
"text": "The problem of finding the missing values of a matrix given a few of its entries, called matrix completion, has gathered a lot of attention in the recent years. Although the problem under the standard low rank assumption is NP-hard, Candès and Recht showed that it can be exactly relaxed if the number of observed entries is sufficiently large. In this work, we introduce a novel matrix completion model that makes use of proximity information about rows and columns by assuming they form communities. This assumption makes sense in several real-world problems like in recommender systems, where there are communities of people sharing preferences, while products form clusters that receive similar ratings. Our main goal is thus to find a low-rank solution that is structured by the proximities of rows and columns encoded by graphs. We borrow ideas from manifold learning to constrain our solution to be smooth on these graphs, in order to implicitly force row and column proximities. Our matrix recovery model is formulated as a convex non-smooth optimization problem, for which a well-posed iterative scheme is provided. We study and evaluate the proposed matrix completion on synthetic and real data, showing that the proposed structured low-rank recovery model outperforms the standard matrix completion model in many situations.",
"title": ""
},
{
"docid": "fdb9da0c4b6225c69de16411c79ac9dc",
"text": "Phylogenetic analyses reveal the evolutionary derivation of species. A phylogenetic tree can be inferred from multiple sequence alignments of proteins or genes. The alignment of whole genome sequences of higher eukaryotes is a computational intensive and ambitious task as is the computation of phylogenetic trees based on these alignments. To overcome these limitations, we here used an alignment-free method to compare genomes of the Brassicales clade. For each nucleotide sequence a Chaos Game Representation (CGR) can be computed, which represents each nucleotide of the sequence as a point in a square defined by the four nucleotides as vertices. Each CGR is therefore a unique fingerprint of the underlying sequence. If the CGRs are divided by grid lines each grid square denotes the occurrence of oligonucleotides of a specific length in the sequence (Frequency Chaos Game Representation, FCGR). Here, we used distance measures between FCGRs to infer phylogenetic trees of Brassicales species. Three types of data were analyzed because of their different characteristics: (A) Whole genome assemblies as far as available for species belonging to the Malvidae taxon. (B) EST data of species of the Brassicales clade. (C) Mitochondrial genomes of the Rosids branch, a supergroup of the Malvidae. The trees reconstructed based on the Euclidean distance method are in general agreement with single gene trees. The Fitch-Margoliash and Neighbor joining algorithms resulted in similar to identical trees. Here, for the first time we have applied the bootstrap re-sampling concept to trees based on FCGRs to determine the support of the branchings. FCGRs have the advantage that they are fast to calculate, and can be used as additional information to alignment based data and morphological characteristics to improve the phylogenetic classification of species in ambiguous cases.",
"title": ""
},
{
"docid": "898efbe8e80d29b1a10e1bed90852dbc",
"text": "The aim of this work is to investigate the effectiveness of novel human-machine interaction paradigms for eHealth applications. In particular, we propose to replace usual human-machine interaction mechanisms with an approach that leverages a chat-bot program, opportunely designed and trained in order to act and interact with patients as a human being. Moreover, we have validated the proposed interaction paradigm in a real clinical context, where the chat-bot has been employed within a medical decision support system having the goal of providing useful recommendations concerning several disease prevention pathways. More in details, the chat-bot has been realized to help patients in choosing the most proper disease prevention pathway by asking for different information (starting from a general level up to specific pathways questions) and to support the related prevention check-up and the final diagnosis. Preliminary experiments about the effectiveness of the proposed approach are reported.",
"title": ""
},
{
"docid": "9e5c123b6f744037436e0d5c917e8640",
"text": "Relational databases have limited support for data collaboration, where teams collaboratively curate and analyze large datasets. Inspired by software version control systems like git, we propose (a) a dataset version control system, giving users the ability to create, branch, merge, difference and search large, divergent collections of datasets, and (b) a platform, DATAHUB, that gives users the ability to perform collaborative data analysis building on this version control system. We outline the challenges in providing dataset version control at scale.",
"title": ""
},
{
"docid": "a62c1426e09ab304075e70b61773914f",
"text": "Converting a scanned or shot line drawing image into a vector graph can facilitate further editand reuse, making it a hot research topic in computer animation and image processing. Besides avoiding noiseinfluence, its main challenge is to preserve the topological structures of the original line drawings, such as linejunctions, in the procedure of obtaining a smooth vector graph from a rough line drawing. In this paper, wepropose a vectorization method of line drawings based on junction analysis, which retains the original structureunlike done by existing methods. We first combine central line tracking and contour tracking, which allowsus to detect the encounter of line junctions when tracing a single path. Then, a junction analysis approachbased on intensity polar mapping is proposed to compute the number and orientations of junction branches.Finally, we make use of bending degrees of contour paths to compute the smoothness between adjacent branches,which allows us to obtain the topological structures corresponding to the respective ones in the input image.We also introduce a correction mechanism for line tracking based on a quadratic surface fitting, which avoidsaccumulating errors of traditional line tracking and improves the robustness for vectorizing rough line drawings.We demonstrate the validity of our method through comparisons with existing methods, and a large amount ofexperiments on both professional and amateurish line drawing images. 本文提出一种基于交叉点分析的线条矢量化方法, 克服了现有方法难以保持拓扑结构的不足。通过中心路径跟踪和轮廓路径跟踪相结合的方式, 准确检测交叉点的出现提出一种基于极坐标亮度映射的交叉点分析方法, 计算交叉点的分支数量和朝向; 利用轮廓路径的弯曲角度判断交叉点相邻分支间的光顺度, 从而获得与原图一致的拓扑结构。",
"title": ""
},
{
"docid": "aa83af152739ac01ba899d186832ee62",
"text": "Predicting user \"ratings\" on items is a crucial task in recommender systems. Matrix factorization methods that computes a low-rank approximation of the incomplete user-item rating matrix provide state-of-the-art performance, especially for users and items with several past ratings (warm starts). However, it is a challenge to generalize such methods to users and items with few or no past ratings (cold starts). Prior work [4][32] have generalized matrix factorization to include both user and item features for performing better regularization of factors as well as provide a model for smooth transition from cold starts to warm starts. However, the features were incorporated via linear regression on factor estimates. In this paper, we generalize this process to allow for arbitrary regression models like decision trees, boosting, LASSO, etc. The key advantage of our approach is the ease of computing --- any new regression procedure can be incorporated by \"plugging\" in a standard regression routine into a few intermediate steps of our model fitting procedure. With this flexibility, one can leverage a large body of work on regression modeling, variable selection, and model interpretation. We demonstrate the usefulness of this generalization using the MovieLens and Yahoo! Buzz datasets.",
"title": ""
},
{
"docid": "7adf46bb0a4ba677e58aee9968d06293",
"text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.",
"title": ""
}
] | scidocsrr |
d8f86e5b201d06d07ec0bf34237f298e | Wnt signaling pathway participates in valproic acid-induced neuronal differentiation of neural stem cells. | [
{
"docid": "a797ab99ed7983bd7372de56d34caca1",
"text": "The discovery of stem cells that can generate neural tissue has raised new possibilities for repairing the nervous system. A rush of papers proclaiming adult stem cell plasticity has fostered the notion that there is essentially one stem cell type that, with the right impetus, can create whatever progeny our heart, liver or other vital organ desires. But studies aimed at understanding the role of stem cells during development have led to a different view — that stem cells are restricted regionally and temporally, and thus not all stem cells are equivalent. Can these views be reconciled?",
"title": ""
}
] | [
{
"docid": "d70a74e37f625f542f8b16e3b0b0e647",
"text": "Word segmentation is the first step of any tasks in Vietnamese language processing. This paper reviews state-of-the-art approaches and systems for word segmentation in Vietnamese. To have an overview of all stages from building corpora to developing toolkits, we discuss building the corpus stage, approaches applied to solve the word segmentation and existing toolkits to segment words in Vietnamese sentences. In addition, this study shows clearly the motivations on building corpus and implementing machine learning techniques to improve the accuracy for Vietnamese word segmentation. According to our observation, this study also reports a few of achievements and limitations in existing Vietnamese word segmentation systems.",
"title": ""
},
{
"docid": "b59a2c49364f3e95a2c030d800d5f9ce",
"text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.",
"title": ""
},
{
"docid": "4a89f20c4b892203be71e3534b32449c",
"text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.",
"title": ""
},
{
"docid": "381a11fe3d56d5850ec69e2e9427e03f",
"text": "We present an approximation algorithm that takes a pool of pre-trained models as input and produces from it a cascaded model with similar accuracy but lower average-case cost. Applied to state-of-the-art ImageNet classification models, this yields up to a 2x reduction in floating point multiplications, and up to a 6x reduction in average-case memory I/O. The auto-generated cascades exhibit intuitive properties, such as using lower-resolution input for easier images and requiring higher prediction confidence when using a computationally cheaper model.",
"title": ""
},
{
"docid": "b8ce74fc2a02a1a5c2d93e2922529bb0",
"text": "The basic evolution of direct torque control from other drive types is explained. Qualitative comparisons with other drives are included. The basic concepts behind direct torque control are clarified. An explanation of direct self-control and the field orientation concepts implemented in the adaptive motor model block is presented. The reliance of the control method on fast processing techniques is stressed. The theoretical foundations for the control concept are provided in summary format. Information on the ancillary control blocks outside the basic direct torque control is given. The implementation of special functions directly related to the control approach is described. Finally, performance data from an actual system is presented.",
"title": ""
},
{
"docid": "4579075f2afcd26058abb875e71ee4c3",
"text": "So, what comes next? Where are we headed with AI, and what level of responsibility do the designers and providers have with managing AI technology? Will we control AI technology or will it control us? How do we handle the economic Curated by Andrew Boyarsky, MSM, PMP Clinical Associate Professor, and Academic Director of the MS in Enterprise Risk Management, Katz School of Graduate and Professional Studies, Yeshiva University “Every major player is working on this technology of artificial intelligence. As of now, it's benign... but I would say that the day is not far off when artificial intelligence as applied to cyber warfare becomes a threat to everybody.”",
"title": ""
},
{
"docid": "20f2c4ca66ff81fee8092e159bb00d94",
"text": "Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives.",
"title": ""
},
{
"docid": "4f400f8e774ebd050ba914011da73514",
"text": "This paper summarizes the method of polyp detection in colonoscopy images and provides preliminary results to participate in ISBI 2015 Grand Challenge on Automatic Polyp Detection in Colonoscopy videos. The key aspect of the proposed method is to learn hierarchical features using convolutional neural network. The features are learned in different scales to provide scale-invariant features through the convolutional neural network, and then each pixel in the colonoscopy image is classified as polyp pixel or non-polyp pixel through fully connected network. The result is refined via smooth filtering and thresholding step. Experimental result shows that the proposed neural network can classify patches of polyp and non-polyp region with an accuracy of about 90%.",
"title": ""
},
{
"docid": "0eb3d3c33b62c04ed5d34fc3a38b5182",
"text": "We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.",
"title": ""
},
{
"docid": "19bcbc9630cf5d8c6d033751ad268a16",
"text": "The economic literature on standards has focused recently on the possibility of market failure with respect to the choice of a standard. In its strongest form, the argument is essentially this: an established standard can persist over a challenger, even where all users prefer a world dominated by the challenger, if users are unable to coordinate their choices. For example, each of us might prefer to have Beta-format videocassette recorders as long as prerecorded Beta tapes continue to be produced, but individually we do not buy Beta machines because we don't think enough others will buy Beta machines to sustain the prerecorded tape supply. I don't buy a Beta format machine because I think that you won't you don't buy one because you think that I won't. In the end, we both turn out to be correct, but we are both worse off than we might have been. This, of course, is a catch-22 that we might suppose to be common in the economy. There will be no cars until there are gas stations there will be no gas stations until there are cars. Without some way out of this conundrum, joyriding can never become a favorite activity of teenagers.1",
"title": ""
},
{
"docid": "efb24b1c6128ecc10b18f35168433b80",
"text": "Automatic vehicle classification is crucial to intelligent transportation system, especially for vehicle-tracking by police. Due to the complex lighting and image capture conditions, image-based vehicle classification in real-world environments is still a challenging task and the performance is far from being satisfactory. However, owing to the mechanism of visual attention, the human vision system shows remarkable capability compared with the computer vision system, especially in distinguishing nuances processing. Inspired by this mechanism, we propose a convolutional neural network (CNN) model of visual attention for image classification. A visual attention-based image processing module is used to highlight one part of an image and weaken the others, generating a focused image. Then the focused image is input into the CNN to be classified. According to the classification probability distribution, we compute the information entropy to guide a reinforcement learning agent to achieve a better policy for image classification to select the key parts of an image. Systematic experiments on a surveillance-nature dataset which contains images captured by surveillance cameras in the front view, demonstrate that the proposed model is more competitive than the large-scale CNN in vehicle classification tasks.",
"title": ""
},
{
"docid": "e43056aad827cd5eea146418aa89ec09",
"text": "The detection and analysis of clusters has become commonplace within geographic information science and has been applied in epidemiology, crime prevention, ecology, demography and other fields. One of the many methods for detecting and analyzing these clusters involves searching the dataset with a flock of boids (bird objects). While boids are effective at searching the dataset once their behaviors are properly configured, it can be difficult to find the proper configuration. Since genetic algorithms have been successfully used to configure neural networks, they may also be useful for configuring parameters guiding boid behaviors. In this paper, we develop a genetic algorithm to evolve the ideal boid behaviors. Preliminary results indicate that, even though the genetic algorithm does not return the same configuration each time, it does converge on configurations that improve over the parameters used when boids were initially proposed for geographic cluster detection. Also, once configured, the boids perform as well as other cluster detection methods. Continued work with this system could determine which parameters have a greater effect on the results of the boid system and could also discover rules for configuring a flock of boids directly from properties of the dataset, such as point density, rather than requiring the time-consuming process of optimizing the parameters for each new dataset.",
"title": ""
},
{
"docid": "96db04b5f86b137328b21471fca221d0",
"text": "Web frameworks involve many aspects, e.g., forms, model, testing, and migration. Developers differ in terms of their per-aspect experience. We describe a methodology for the identification of relevant aspects of a web app framework, measurement of experience atoms per developer and per aspect based on the commit history of actual projects, and the compilation of developer profiles for summarizing the relevance of different aspects and the developers’ contribution to the project. Measurement relies on a rule-based language. Our case study is concerned with the Pythonbased Django web app framework and the open source Django-Oscar project from which experience atoms were extracted.",
"title": ""
},
{
"docid": "d1ec971608eda914e74f9ffc181c9b9f",
"text": "The steady increase in photovoltaic (PV) installations calls for new and better control methods in respect to the utility grid connection. Limiting the harmonic distortion is essential to the power quality, but other requirements also contribute to a more safe grid-operation, especially in dispersed power generation networks. For instance, the knowledge of the utility impedance at the fundamental frequency can be used to detect a utility failure. A PV-inverter with this feature can anticipate a possible network problem and decouple it in time. This paper describes the digital implementation of a PV-inverter with different advanced, robust control strategies and an embedded online technique to determine the utility grid impedance. By injecting an interharmonic current and measuring the voltage response it is possible to estimate the grid impedance at the fundamental frequency. The presented technique, which is implemented with the existing sensors and the CPU of the PV-inverter, provides a fast and low cost approach for online impedance measurement, which may be used for detection of islanding operation. Practical tests on an existing PV-inverter validate the control methods, the impedance measurement, and the islanding detection.",
"title": ""
},
{
"docid": "27bcbde431c340db7544b58faa597fb7",
"text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.",
"title": ""
},
{
"docid": "2c48e9908078ca192ff191121ce90e21",
"text": "In the hierarchy of data, information and knowledge, computational methods play a major role in the initial processing of data to extract information, but they alone become less effective to compile knowledge from information. The Kyoto Encyclopedia of Genes and Genomes (KEGG) resource (http://www.kegg.jp/ or http://www.genome.jp/kegg/) has been developed as a reference knowledge base to assist this latter process. In particular, the KEGG pathway maps are widely used for biological interpretation of genome sequences and other high-throughput data. The link from genomes to pathways is made through the KEGG Orthology system, a collection of manually defined ortholog groups identified by K numbers. To better automate this interpretation process the KEGG modules defined by Boolean expressions of K numbers have been expanded and improved. Once genes in a genome are annotated with K numbers, the KEGG modules can be computationally evaluated revealing metabolic capacities and other phenotypic features. The reaction modules, which represent chemical units of reactions, have been used to analyze design principles of metabolic networks and also to improve the definition of K numbers and associated annotations. For translational bioinformatics, the KEGG MEDICUS resource has been developed by integrating drug labels (package inserts) used in society.",
"title": ""
},
{
"docid": "a60128a5b5616added12f62e801671f0",
"text": "Research shows that many organizations overlook needs and opportunities to strengthen ethics. Barriers can make it hard to see the need for stronger ethics and even harder to take effective action. These barriers include the organization's misleading use of language, misuse of an ethics code, culture of silence, strategies of justification, institutional betrayal, and ethical fallacies. Ethics placebos tend to take the place of steps to see, solve, and prevent problems. This article reviews relevant research and specific steps that create change.",
"title": ""
},
{
"docid": "1977e7813b15ffb3a4238f3ed40f0e1f",
"text": "Despite the existence of standard protocol, many stabilization centers (SCs) continue to experience high mortality of children receiving treatment for severe acute malnutrition. Assessing treatment outcomes and identifying predictors may help to overcome this problem. Therefore, a 30-month retrospective cohort study was conducted among 545 randomly selected medical records of children <5 years of age admitted to SCs in Gedeo Zone. Data was entered by Epi Info version 7 and analyzed by STATA version 11. Cox proportional hazards model was built by forward stepwise procedure and compared by the likelihood ratio test and Harrell's concordance, and fitness was checked by Cox-Snell residual plot. During follow-up, 51 (9.3%) children had died, and 414 (76%) and 26 (4.8%) children had recovered and defaulted (missed follow-up for 2 consecutive days), respectively. The survival rates at the end of the first, second and third weeks were 95.3%, 90% and 85%, respectively, and the overall mean survival time was 79.6 days. Age <24 months (adjusted hazard ratio [AHR] =2.841, 95% confidence interval [CI] =1.101-7.329), altered pulse rate (AHR =3.926, 95% CI =1.579-9.763), altered temperature (AHR =7.173, 95% CI =3.05-16.867), shock (AHR =3.805, 95% CI =1.829-7.919), anemia (AHR =2.618, 95% CI =1.148-5.97), nasogastric tube feeding (AHR =3.181, 95% CI =1.18-8.575), hypoglycemia (AHR =2.74, 95% CI =1.279-5.87) and treatment at hospital stabilization center (AHR =4.772, 95% CI =1.638-13.9) were independent predictors of mortality. The treatment outcomes and incidence of death were in the acceptable ranges of national and international standards. Intervention to further reduce deaths has to focus on young children with comorbidities and altered general conditions.",
"title": ""
},
{
"docid": "061face2272a6c5a31c6fca850790930",
"text": "Antibiotic feeding studies were conducted on the firebrat,Thermobia domestica (Zygentoma, Lepismatidae) to determine if the insect's gut cellulases were of insect or microbial origin. Firebrats were fed diets containing either nystatin, metronidazole, streptomycin, tetracycline, or an antibiotic cocktail consisting of all four antibiotics, and then their gut microbial populations and gut cellulase levels were monitored and compared with the gut microbial populations and gut cellulase levels in firebrats feeding on antibiotic-free diets. Each antibiotic significantly reduced the firebrat's gut micro-flora. Nystatin reduced the firebrat's viable gut fungi by 89%. Tetracycline and the antibiotic cocktail reduced the firebrat's viable gut bacteria by 81% and 67%, respectively, and metronidazole, streptomycin, tetracycline, and the antibiotic cocktail reduced the firebrat's total gut flora by 35%, 32%, 55%, and 64%, respectively. Although antibiotics significantly reduced the firebrat's viable and total gut flora, gut cellulase levels in firebrats fed antibiotics were not significantly different from those in firebrats on an antibiotic-free diet. Furthermore, microbial populations in the firebrat's gut decreased significantly over time, even in firebrats feeding on the antibiotic-free diet, without corresponding decreases in gut cellulase levels. Based on this evidence, we conclude that the gut cellulases of firebrats are of insect origin. This conclusion implies that symbiont-independent cellulose digestion is a primitive trait in insects and that symbiont-mediated cellulose digestion is a derived condition.",
"title": ""
}
] | scidocsrr |
d974f0ac54eac41eea44af4ed4e6d433 | Incorporation of Application Layer Protocol Syntax into Anomaly Detection | [
{
"docid": "eaf0693dd5447d58d04e10aef02ef331",
"text": "A key step in the semantic analysis of network traffic is to parse the traffic stream according to the high-level protocols it contains. This process transforms raw bytes into structured, typed, and semantically meaningful data fields that provide a high-level representation of the traffic. However, constructing protocol parsers by hand is a tedious and error-prone affair due to the complexity and sheer number of application protocols.This paper presents binpac, a declarative language and compiler designed to simplify the task of constructing robust and efficient semantic analyzers for complex network protocols. We discuss the design of the binpac language and a range of issues in generating efficient parsers from high-level specifications. We have used binpac to build several protocol parsers for the \"Bro\" network intrusion detection system, replacing some of its existing analyzers (handcrafted in C++), and supplementing its operation with analyzers for new protocols. We can then use Bro's powerful scripting language to express application-level analysis of network traffic in high-level terms that are both concise and expressive. binpac is now part of the open-source Bro distribution.",
"title": ""
},
{
"docid": "49974a648adfa0ebe8f8a3ecc3454e7f",
"text": "Traditional intrusion detection systems (IDS) detect attacks by comparing current behavior to signatures of known attacks. One main drawback is the inability of detecting new attacks which do not have known signatures. In this paper we propose a learning algorithm that constructs models of normal behavior from attack-free network traffic. Behavior that deviates from the learned normal model signals possible novel attacks. Our IDS is unique in two respects. First, it is nonstationary, modeling probabilities based on the time since the last event rather than on average rate. This prevents alarm floods. Second, the IDS learns protocol vocabularies (at the data link through application layers) in order to detect unknown attacks that attempt to exploit implementation errors in poorly tested features of the target software. On the 1999 DARPA IDS evaluation data set [9], we detect 70 of 180 attacks (with 100 false alarms), about evenly divided between user behavioral anomalies (IP addresses and ports, as modeled by most other systems) and protocol anomalies. Because our methods are unconventional there is a significant non-overlap of our IDS with the original DARPA participants, which implies that they could be combined to increase coverage.",
"title": ""
}
] | [
{
"docid": "776ac205768b4ab9067570d85ae4eac6",
"text": "This paper advances an \"information goods\" theory that explains prestige processes as an emergent product of psychological adaptations that evolved to improve the quality of information acquired via cultural transmission. Natural selection favored social learners who could evaluate potential models and copy the most successful among them. In order to improve the fidelity and comprehensiveness of such ranked-biased copying, social learners further evolved dispositions to sycophantically ingratiate themselves with their chosen models, so as to gain close proximity to, and prolonged interaction with, these models. Once common, these dispositions created, at the group level, distributions of deference that new entrants may adaptively exploit to decide who to begin copying. This generated a preference for models who seem generally \"popular.\" Building on social exchange theories, we argue that a wider range of phenomena associated with prestige processes can more plausibly be explained by this simple theory than by others, and we test its predictions with data from throughout the social sciences. In addition, we distinguish carefully between dominance (force or force threat) and prestige (freely conferred deference).",
"title": ""
},
{
"docid": "3b413d4aeffdd7f78d31e67d142a72df",
"text": "Light fidelity (LiFi) uses light emitting diodes (LEDs) for high-speed wireless communications. Since an LED lamp covers a small area, a LiFi system with multiple access points (APs) can offer a significantly high spatial throughput. However, the spatial distribution of data rates achieved by LiFi fluctuates because users experience inter-cell interference from neighboring LiFi APs. In order to guarantee a quality of service (QoS) for all users in the network, an RF network is considered as an additional wireless networking layer. This hybrid LiFi/RF network enables users with low levels of optical signals to achieve the desired QoS by migrating to the RF network. With regard to moving users, the hybrid LiFi/RF system dynamically allocates either a LiFi AP or an RF AP to users based on their channel state information. In this paper, a dynamic load balancing scheme is proposed, which considers the handover overhead in order to improve the overall system throughput. Joint optimization algorithm (JOA) and separate optimization algorithm (SOA), which jointly and separately optimize the AP assignment and resource allocation, respectively, are proposed. Simulation results show that SOA can offer a better performance/complexity tradeoff than JOA for system load balancing.",
"title": ""
},
{
"docid": "a43e6a21760df2420e2016a69e6be1f0",
"text": "Palmar creases and dermal ridge patterns of 34 patients with alcohol embryopathy are compared with 470 healthy individuals. In alcohol embryopathy several typical deviations were noted. Palmar Creases. The interdigital part of the distal palmar crease is generally sharply bent, the proximal transverse crease is hypoplastic or missing, the thenar crease is commonly well marked. Simian creases and bridged palmar creases are more common in patients with alcohol embryopathy than in healthy individuals. Ridge Patterns of the Palm. The main line D coming from triradius d in patients with alcohol embryopathy mostly shows a low type of ending in the fourth interdigital area; in this area loops are twice as common as in healthy individuals. Patterns of the Fingertips. No deviations were noted in the distribution of whorls and loops, but virtually no arches were observed in patients with alcohol embryopathy. These anomalies suggest embryonic damage in the twelvth week of gestation.",
"title": ""
},
{
"docid": "0190bdc5eafae72620f7fabbcdcc223c",
"text": "Breast cancer is regarded as one of the most frequent mortality causes among women. As early detection of breast cancer increases the survival chance, creation of a system to diagnose suspicious masses in mammograms is important. In this paper, two automated methods are presented to diagnose mass types of benign and malignant in mammograms. In the first proposed method, segmentation is done using an automated region growing whose threshold is obtained by a trained artificial neural network (ANN). In the second proposed method, segmentation is performed by a cellular neural network (CNN) whose parameters are determined by a genetic algorithm (GA). Intensity, textural, and shape features are extracted from segmented tumors. GA is used to select appropriate features from the set of extracted features. In the next stage, ANNs are used to classify the mammograms as benign or malignant. To evaluate the performance of the proposed methods different classifiers (such as random forest, naïve Bayes, SVM, and KNN) are used. Results of the proposed techniques performed on MIAS and DDSM databases are promising. The obtained sensitivity, specificity, and accuracy rates are 96.87%, 95.94%, and 96.47%, respectively. 2014 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3c4e0d260fd25206685501e53e319848",
"text": "Maze solving using multiple algorithms is one of the important problems in the last years, maze solving problem is to determine the path of a mobile robot from its source position to a destination position through a workspace populated with obstacles, in addition to finding the shortest path among the solutions. Autonomous robotics is a field with wide-reaching applications, from bomb-sniffing robots to autonomous devices for finding humans in wreckage to home automation; many people are interested in low-power, high speed, reliable solutions. we will introduce an algorithm to solve mazes and avoid robots to go through the long process, this algorithm is based on image processing and shortest path algorithm, the algorithm will work efficiently because of the preprocessing on maze's image data rather than going through the maze cell by cell. Therefore, it focuses on the entire maze rather than the current part that an agent is in it. This approach will give the robot preplanning time before falling in mistakes like loops and being trapped in a local minimum of dead ends of labyrinth besides saving time, because the entire maze map will be known before the navigation starts.",
"title": ""
},
{
"docid": "8efd8be0df5f8d8e0e16782b65cd73e6",
"text": "Named entity recognition (NER) is used in many domains beyond the newswire text that comprises current gold-standard corpora. Recent work has used Wikipedia’s link structure to automatically generate near gold-standard annotations. Until now, these resources have only been evaluated on newswire corpora or themselves. We present the first NER evaluation on a Wikipedia gold standard (WG) corpus. Our analysis of cross-corpus performance on WG shows that Wikipedia text may be a harder NER domain than newswire. We find that an automatic annotation of Wikipedia has high agreement with WG and, when used as training data, outperforms newswire models by up to 7.7%.",
"title": ""
},
{
"docid": "a01333e16abb503cf6d26c54ac24d473",
"text": "Topic models could have a huge impact on improving the ways users find and discover content in digital libraries and search interfaces through their ability to automatically learn and apply subject tags to each and every item in a collection, and their ability to dynamically create virtual collections on the fly. However, much remains to be done to tap this potential, and empirically evaluate the true value of a given topic model to humans. In this work, we sketch out some sub-tasks that we suggest pave the way towards this goal, and present methods for assessing the coherence and interpretability of topics learned by topic models. Our large-scale user study includes over 70 human subjects evaluating and scoring almost 500 topics learned from collections from a wide range of genres and domains. We show how scoring model -- based on pointwise mutual information of word-pair using Wikipedia, Google and MEDLINE as external data sources - performs well at predicting human scores. This automated scoring of topics is an important first step to integrating topic modeling into digital libraries",
"title": ""
},
{
"docid": "09b35c40a65a0c2c0f58deb49555000d",
"text": "There are a wide range of forensic and analysis tools to examine digital evidence in existence today. Traditional tool design examines each source of digital evidence as a BLOB (binary large object) and it is up to the examiner to identify the relevant items from evidence. In the face of rapid technological advancements we are increasingly confronted with a diverse set of digital evidence and being able to identify a particular tool for conducting a specific analysis is an essential task. In this paper, we present a systematic study of contemporary forensic and analysis tools using a hypothesis based review to identify the different functionalities supported by these tools. We highlight the limitations of the forensic tools in regards to evidence corroboration and develop a case for building evidence correlation functionalities into these tools.",
"title": ""
},
{
"docid": "1bea3fdeb0ca47045a64771bd3925e11",
"text": "The goal of Word Sense Disambiguation (WSD) is to identify the correct meaning of a word in the particular context. Traditional supervised methods only use labeled data (context), while missing rich lexical knowledge such as the gloss which defines the meaning of a word sense. Recent studies have shown that incorporating glosses into neural networks for WSD has made significant improvement. However, the previous models usually build the context representation and gloss representation separately. In this paper, we find that the learning for the context and gloss representation can benefit from each other. Gloss can help to highlight the important words in the context, thus building a better context representation. Context can also help to locate the key words in the gloss of the correct word sense. Therefore, we introduce a co-attention mechanism to generate co-dependent representations for the context and gloss. Furthermore, in order to capture both word-level and sentence-level information, we extend the attention mechanism in a hierarchical fashion. Experimental results show that our model achieves the state-of-the-art results on several standard English all-words WSD test datasets.",
"title": ""
},
{
"docid": "8770cfba83e16454e5d7244201d47628",
"text": "Representing documents is a crucial component in many NLP tasks, for instance predicting aspect ratings in reviews. Previous methods for this task treat documents globally, and do not acknowledge that target categories are often assigned by their authors with generally no indication of the specific sentences that motivate them. To address this issue, we adopt a weakly supervised learning model, which jointly learns to focus on relevant parts of a document according to the context along with a classifier for the target categories. Derived from the weighted multiple-instance regression (MIR) framework, the model learns decomposable document vectors for each individual category and thus overcomes the representational bottleneck in previous methods due to a fixed-length document vector. During prediction, the estimated relevance or saliency weights explicitly capture the contribution of each sentence to the predicted rating, thus offering an explanation of the rating. Our model achieves state-of-the-art performance on multi-aspect sentiment analysis, improving over several baselines. Moreover, the predicted saliency weights are close to human estimates obtained by crowdsourcing, and increase the performance of lexical and topical features for review segmentation and summarization.",
"title": ""
},
{
"docid": "98a23af0d7a670cc3e22851ac136e9e7",
"text": "Internet of Things (IOT) aims at interfacing different gadgets to the internet web – encouraging human-machine and machine-machine connections offering superior security, console and effectiveness. The concept of IOT is utilized in this model, remote monitoring of energy meter which is intended to overcome the issues in existing Automatic Meter Reading (AMR) system. It spares tremendous human work. A controller integrated with electronic energy meter assist in distant correspondence from the developed android application. This application enables monitoring of bill generation at consumer premises without human intervention and also in visualizing live data consumption and sight energy expended points of interest on daily/monthly basis. In addition, it gives authority to power organizations to seize lenient customers who have extraordinary dues for remote disconnection of the power supply. So IOT based remote AMR framework is more viable methodology than tradition of billing framework. General Terms Central office",
"title": ""
},
{
"docid": "cf875189a6cc55d6f669f909f7a020d6",
"text": "Unfortunately, better throughputs are continuously requested mobile networks face spectral resource scarcity. Consequently, spectrum resource's use must be optimized for that operators have introduced frequency reuse mechanism. Unfortunately, this mechanism is limited by co-channel interference and adjacent channel interference. An Orthogonal Frequency Division Multiple Access (OFDMA) has been introduced which was also better enhanced by femto cells concept. From this work it can be deduced that the Signal to co-channel interference ratio power does not exceed 14.1 dB for a user in edge cell whereas it can achieved 15.3 dB for a user in center cell with the same path loss value (γ=3.5). The signal to adjacent channel interference power ratio decreases from 20.7 w to 6.9 w if the number of active users in the neighbor interfering cell is increased from 10 to 30. However, from the fourth generation's study, we deduce that macro user's performances are reversely varied with femto cells number. In fact, SINR decreases from 10 dB to −5 dB if we increase the number of femto cells from 1 to 7.",
"title": ""
},
{
"docid": "28cc4da43f3a668d9f2d1797867e2244",
"text": "Brain activity is associated with changes in optical properties of brain tissue. Optical measurements during brain activation can assess haemoglobin oxygenation, cytochrome-c-oxidase redox state, and two types of changes in light scattering reflecting either membrane potential (fast signal) or cell swelling (slow signal), respectively. In previous studies of exposed brain tissue, optical imaging of brain activity has been achieved at high temporal and microscopical spatial resolution. Now, using near-infrared light that can penetrate biological tissue reasonably well, it has become possible to assess brain activity in human subjects through the intact skull non-invasively. After early studies employing single-site near-infrared spectroscopy, first near-infrared imaging devices are being applied successfully for low-resolution functional brain imaging. Advantages of the optical methods include biochemical specificity, a temporal resolution in the millisecond range, the potential of measuring intracellular and intravascular events simultaneously and the portability of the devices enabling bedside examinations.",
"title": ""
},
{
"docid": "f56f2119b3e65970db35676fe1cac9ba",
"text": "While behavioral and social sciences occupations comprise one of the largest portions of the \"STEM\" workforce, most studies of diversity in STEM overlook this population, focusing instead on fields such as biomedical or physical sciences. This study evaluates major demographic trends and productivity in the behavioral and social sciences research (BSSR) workforce in the United States during the past decade. Our analysis shows that the demographic trends for different BSSR fields vary. In terms of gender balance, there is no single trend across all BSSR fields; rather, the problems are field-specific, and disciplines such as economics and political science continue to have more men than women. We also show that all BSSR fields suffer from a lack of racial and ethnic diversity. The BSSR workforce is, in fact, less representative of racial and ethnic minorities than are biomedical sciences or engineering. Moreover, in many BSSR subfields, minorities are less likely to receive funding. We point to various funding distribution patterns across different demographic groups of BSSR scientists, and discuss several policy implications.",
"title": ""
},
{
"docid": "24902498d03f15aa63110e8cd4ee8a83",
"text": "Precision robotic pollination systems can not only fill the gap of declining natural pollinators, but can also surpass them in efficiency and uniformity, helping to feed the fast-growing human population on Earth. This paper presents the design and ongoing development of an autonomous robot named “BrambleBee”, which aims at pollinating bramble plants in a greenhouse environment. Partially inspired by the ecology and behavior of bees, BrambleBee employs state-of-the-art localization and mapping, visual perception, path planning, motion control, and manipulation techniques to create an efficient and robust autonomous pollination system.",
"title": ""
},
{
"docid": "a208187fc81a633ac9332ee11567b1a7",
"text": "Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin-Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.",
"title": ""
},
{
"docid": "fbe8379aa9af67d746df0c2335f3675a",
"text": "The large volume of data produced by the increasingly deployed Internet of Things (IoT), is shifting security priorities to consider data access control from a data-centric perspective. To secure the IoT, it becomes essential to implement a data access control solution that offers the necessary flexibility required to manage a large number of IoT devices. The concept of Ciphertext-Policy Attribute-based Encryption (CP-ABE) fulfills such requirement. It allows the data source to encrypt data while cryptographically enforcing a security access policy, whereby only authorized data users with the desired attributes are able to decrypt data. Yet, despite these manifest advantages; CP-ABE has not been designed taking into consideration energy efficiency. Many IoT devices, like sensors and actuators, cannot be part of CP-ABE enforcement points, because of their resource limitations in terms of CPU, memory, battery, etc. In this paper, we propose to extend the basic CP-ABE scheme using effective pre-computation techniques. We will experimentally compute the energy saving potential offered by the proposed variant of CP-ABE, and thus demonstrate the applicability of CP-ABE in the IoT.",
"title": ""
},
{
"docid": "16d6f45b88e576ae825e20fd8cde203c",
"text": "With the advent of Internet, people actively express their opinions about products, services, events, political parties, etc., in social media, blogs, and website comments. The amount of research work on sentiment analysis is growing explosively. However, the majority of research efforts are devoted to English-language data, while a great share of information is available in other languages. We present a state-of-the-art review on multilingual sentiment analysis. More importantly, we compare our own implementation of existing approaches on common data. Precision observed in our experiments is typically lower than the one reported by the original authors, which we attribute to the lack of detail in the original presentation of those approaches. Thus, we compare the existing works by what they really offer to the reader, including whether they allow for accurate implementation and for reliable reproduction of the reported results.",
"title": ""
},
{
"docid": "139ecd9ff223facaec69ad6532f650db",
"text": "Student retention in open and distance learning (ODL) is comparatively poor to traditional education and, in some contexts, embarrassingly low. Literature on the subject of student retention in ODL indicates that even when interventions are designed and undertaken to improve student retention, they tend to fall short. Moreover, this area has not been well researched. The main aim of our research, therefore, is to better understand and measure students’ attitudes and perceptions towards the effectiveness of mobile learning. Our hope is to determine how this technology can be optimally used to improve student retention at Bachelor of Science programmes at Indira Gandhi National Open University (IGNOU) in India. For our research, we used a survey. Results of this survey clearly indicate that offering mobile learning could be one method improving retention of BSc students, by enhancing their teaching/ learning and improving the efficacy of IGNOU’s existing student support system. The biggest advantage of this technology is that it can be used anywhere, anytime. Moreover, as mobile phone usage in India explodes, it offers IGNOU easy access to a larger number of learners. This study is intended to help inform those who are seeking to adopt mobile learning systems with the aim of improving communication and enriching students’ learning experiences in their ODL institutions.",
"title": ""
},
{
"docid": "acefbbb42607f2d478a16448644bd6e6",
"text": "The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http://homes.cs.washington.edu/~ccwu/vsfm/.",
"title": ""
}
] | scidocsrr |
55e2362d012d58ae90a1a987246593b3 | Device Mismatch: An Analog Design Perspective | [
{
"docid": "df374fcdaf0b7cd41ca5ef5932378655",
"text": "This paper is concerned with the design of precision MOS anafog circuits. Section ff of the paper discusses the characterization and modeling of mismatch in MOS transistors. A characterization methodology is presented that accurately predicts the mismatch in drain current over a wide operating range using a minimumset of measured data. The physical causes of mismatch are discussed in detail for both pand n-channel devices. Statistieal methods are used to develop analytical models that relate the mismatchto the devicedimensions.It is shownthat these models are valid for smafl-geometrydevices also. Extensive experimental data from a 3-pm CMOS process are used to verify these models. Section 111of the paper demonstrates the applicationof the transistor matching studies to the design of a high-performance digital-to-analog converter (DAC). A circuit designmethodologyis presented that highfights the close interaction between the circuit yield and the matching accuracy of devices. It has been possibleto achievea circuit yieldof greater than 97 percent as a result of the knowledgegenerated regarding the matching behavior of transistors and due to the systematicdesignapproach.",
"title": ""
}
] | [
{
"docid": "79560f7ec3c5f42fe5c5e0ad175fe6a0",
"text": "The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We are addressing these challenges by defining resilience properties of ANN-based classifiers as the maximum amount of input or sensor perturbation which is still tolerated. This problem of computing maximum perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and using parallelization of MIP-solvers results in an almost linear speed-up in the number (up to a certain limit) of computing cores in our experiments. We demonstrate the effectiveness and scalability of our approach by means of computing maximum resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.",
"title": ""
},
{
"docid": "8d6171dbe50a25873bd435ad25e48ae9",
"text": "An automatic landing system is required on a long-range drone because the position of the vehicle cannot be reached visually by the pilot. The autopilot system must be able to correct the drone movement dynamically in accordance with its flying altitude. The current article describes autopilot system on an H-Octocopter drone using image processing and complementary filter. This paper proposes a new approach to reduce oscillations during the landing phase on a big drone. The drone flies above 10 meters to a provided coordinate using GPS data, to check for the existence of the landing area. This process is done visually using the camera. PID controller is used to correct the movement by calculate error distance detected by camera. The controller also includes altitude parameters on its calculations through a complementary filter. The controller output is the PWM signals which control the movement and altitude of the vehicle. The signal then transferred to Flight Controller through serial communication, so that, the drone able to correct its movement. From the experiments, the accuracy is around 0.56 meters and it can be done in 18 seconds.",
"title": ""
},
{
"docid": "86318b52b1bdf0dcf64a2d067645237b",
"text": "Neurons that fire high-frequency bursts of spikes are found in various sensory systems. Although the functional implications of burst firing might differ from system to system, bursts are often thought to represent a distinct mode of neuronal signalling. The firing of bursts in response to sensory input relies on intrinsic cellular mechanisms that work with feedback from higher centres to control the discharge properties of these cells. Recent work sheds light on the information that is conveyed by bursts about sensory stimuli, on the cellular mechanisms that underlie bursting, and on how feedback can control the firing mode of burst-capable neurons, depending on the behavioural context. These results provide strong evidence that bursts have a distinct function in sensory information transmission.",
"title": ""
},
{
"docid": "26b67fe7ee89c941d313187672b1d514",
"text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.",
"title": ""
},
{
"docid": "613f0bf05fb9467facd2e58b70d2b09e",
"text": "The gold standard for improving sensory, motor and or cognitive abilities is long-term training and practicing. Recent work, however, suggests that intensive training may not be necessary. Improved performance can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulation. Such training-independent sensory learning (TISL), which has been intensively studied in the somatosensory system, induces in humans lasting changes in perception and neural processing, without any explicit task training. It has been suggested that the effectiveness of this form of learning stems from the fact that the stimulation protocols used are optimized to alter synaptic transmission and efficacy. TISL provides novel ways to investigate in humans the relation between learning processes and underlying cellular and molecular mechanisms, and to explore alternative strategies for intervention and therapy.",
"title": ""
},
{
"docid": "6a4a76e48ff8bfa9ad17f116c3258d49",
"text": "Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys for shallow domain adaptation, but few timely reviews the emerging deep learning based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and analyze and compare briefly the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.",
"title": ""
},
{
"docid": "1ffef8248a0cc0b69a436c4d949ed221",
"text": "This paper presents preliminary research on a new decision making tool that integrates financial and non-financial performance measures in project portfolio management via the Triple Bottom Line (TBL) and uses the Analytic Hierarchy Process (AHP) as a decision support model. This new tool evaluates and prioritizes a set of projects and creates a balanced project portfolio based upon the perspectives and priorities of decision makers. It can assist decision makers with developing and making proactive decisions which support the strategy of their organization with respect to financial, environmental and social issues, ensuring the sustainability of their organization in the future.",
"title": ""
},
{
"docid": "fd8b7b9f4469bd253ee66f6c464691a6",
"text": "The \"flipped classroom\" is a learning model in which content attainment is shifted forward to outside of class, then followed by instructor-facilitated concept application activities in class. Current studies on the flipped model are limited. Our goal was to provide quantitative and controlled data about the effectiveness of this model. Using a quasi-experimental design, we compared an active nonflipped classroom with an active flipped classroom, both using the 5-E learning cycle, in an effort to vary only the role of the instructor and control for as many of the other potentially influential variables as possible. Results showed that both low-level and deep conceptual learning were equivalent between the conditions. Attitudinal data revealed equal student satisfaction with the course. Interestingly, both treatments ranked their contact time with the instructor as more influential to their learning than what they did at home. We conclude that the flipped classroom does not result in higher learning gains or better attitudes compared with the nonflipped classroom when both utilize an active-learning, constructivist approach and propose that learning gains in either condition are most likely a result of the active-learning style of instruction rather than the order in which the instructor participated in the learning process.",
"title": ""
},
{
"docid": "04e478610728f0aae76e5299c28da25a",
"text": "Single image super resolution is one of the most important topic in computer vision and image processing research, many convolutional neural networks (CNN) based super resolution algorithms were proposed and achieved advanced performance, especially in recovering image details, in which PixelCNN is the most representative one. However, due to the intensive computation requirement of PixelCNN model, running time remains a major challenge, which limited its wider application. In this paper, several modifications are proposed to improve PixelCNN based recursive super resolution model. First, a discrete logistic mixture likelihood is adopted, then a cache structure for generating process is proposed, with these modifications, numerous redundant computations are removed without loss of accuracy. Finally, a partial generating network is proposed for higher resolution generation. Experiments on CelebA dataset demonstrate the effectiveness the superiority of the proposed method.",
"title": ""
},
{
"docid": "0d7c29b40f92b5997791f1bbe192269c",
"text": "We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics – natural language captions or other labels – depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC’16 benchmark, video summarization on the SumMe and TV-Sum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art, in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks.",
"title": ""
},
{
"docid": "47f2a5a61677330fc85ff6ac700ac39f",
"text": "We present CHALET, a 3D house simulator with support for navigation and manipulation. CHALET includes 58 rooms and 10 house configuration, and allows to easily create new house and room layouts. CHALET supports a range of common household activities, including moving objects, toggling appliances, and placing objects inside closeable containers. The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment.",
"title": ""
},
{
"docid": "ffe6edef11daef1db0c4aac77bed7a23",
"text": "MPI is a well-established technology that is used widely in high-performance computing environment. However, setting up an MPI cluster can be challenging and time-consuming. This paper tackles this challenge by using modern containerization technology, which is Docker, and container orchestration technology, which is Docker Swarm mode, to automate the MPI cluster setup and deployment. We created a ready-to-use solution for developing and deploying MPI programs in a cluster of Docker containers running on multiple machines, orchestrated with Docker Swarm mode, to perform high computation tasks. We explain the considerations when creating Docker image that will be instantiated as MPI nodes, and we describe the steps needed to set up a fully connected MPI cluster as Docker containers running in a Docker Swarm mode. Our goal is to give the rationale behind our solution so that others can adapt to different system requirements. All pre-built Docker images, source code, documentation, and screencasts are publicly available.",
"title": ""
},
{
"docid": "b6bbd83da68fbf1d964503fb611a2be5",
"text": "Battery systems are affected by many factors, the most important one is the cells unbalancing. Without the balancing system, the individual cell voltages will differ over time, battery pack capacity will decrease quickly. That will result in the fail of the total battery system. Thus cell balancing acts an important role on the battery life preserving. Different cell balancing methodologies have been proposed for battery pack. This paper presents a review and comparisons between the different proposed balancing topologies for battery string based on MATLAB/Simulink® simulation. The comparison carried out according to circuit design, balancing simulation, practical implementations, application, balancing speed, complexity, cost, size, balancing system efficiency, voltage/current stress … etc.",
"title": ""
},
{
"docid": "4028f1cd20127f3c6599e6073bb1974b",
"text": "This paper presents a power delivery monitor (PDM) peripheral integrated in a flip-chip packaged 28 nm system-on-chip (SoC) for mobile computing. The PDM is composed entirely of digital standard cells and consists of: 1) a fully integrated VCO-based digital sampling oscilloscope; 2) a synthetic current load; and 3) an event engine for triggering, analysis, and debug. Incorporated inside an SoC, it enables rapid, automated analysis of supply impedance, as well as monitoring supply voltage droop of multi-core CPUs running full software workloads and during scan-test operations. To demonstrate these capabilities, we describe a power integrity case study of a dual-core ARM Cortex-A57 cluster in a commercial 28 nm mobile SoC. Measurements are presented of power delivery network (PDN) electrical parameters, along with waveforms of the CPU cluster running test cases and benchmarks on bare metal and Linux OS. The effect of aggressive power management techniques, such as power gating on the dominant resonant frequency and peak impedance, is highlighted. Finally, we present measurements of supply voltage noise during various scan-test operations, an often-neglected aspect of SoC power integrity.",
"title": ""
},
{
"docid": "b3947afb7856b0ffd5983f293ca508b9",
"text": "High gain low profile slotted cavity with substrate integrated waveguide (SIW) is presented using TE440 high order mode. The proposed antenna is implemented to achieve 16.4 dBi high gain at 28 GHz with high radiation efficiency of 98%. Furthermore, the proposed antenna has a good radiation pattern. Simulated results using CST and HFSS software are presented and discussed. Several advantages such as low profile, low cost, light weight, small size, and easy implementation make the proposed antenna suitable for millimeter-wave wireless communications.",
"title": ""
},
{
"docid": "e3c8f10316152f0bc775f4823b79c7f6",
"text": "The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN, at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive window, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision.",
"title": ""
},
{
"docid": "6b203b7a8958103b30701ac139eb1fb8",
"text": "Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems-patient classification, fundamental biological processes and treatment of patients-and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.",
"title": ""
},
{
"docid": "70f1f5de73c3a605b296299505fd4e61",
"text": "Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process allows to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for this intractable averaging operation consists in scaling the layers undergoing dropout randomization. This simple rule called “standard dropout” is efficient, but might degrade the accuracy of the prediction. In this work we introduce a novel approach, coined “dropout distillation”, that allows us to train a predictor in a way to better approximate the intractable, but preferable, averaging process, while keeping under control its computational efficiency. We are thus able to construct models that are as efficient as standard dropout, or even more efficient, while being more accurate. Experiments on standard benchmark datasets demonstrate the validity of our method, yielding consistent improvements over conventional dropout.",
"title": ""
},
{
"docid": "0b9b85dc4f80e087f591f89b12bb6146",
"text": "Entity profiling (EP) as an important task of Web mining and information extraction (IE) is the process of extracting entities in question and their related information from given text resources. From computational viewpoint, the Farsi language is one of the less-studied and less-resourced languages, and suffers from the lack of high quality language processing tools. This problem emphasizes the necessity of developing Farsi text processing systems. As an element of EP research, we present a semantic approach to extract profile of person entities from Farsi Web documents. Our approach includes three major components: (i) pre-processing, (ii) semantic analysis and (iii) attribute extraction. First, our system takes as input the raw text, and annotates the text using existing pre-processing tools. In semantic analysis stage, we analyze the pre-processed text syntactically and semantically and enrich the local processed information with semantic information obtained from a distant knowledge base. We then use a semantic rule-based approach to extract the related information of the persons in question. We show the effectiveness of our approach by testing it on a small Farsi corpus. The experimental results are encouraging and show that the proposed method outperforms baseline methods.",
"title": ""
},
{
"docid": "e32fc572acb93c65083b372a6b24e7ee",
"text": "BACKGROUND\nFemale Genital Mutilation/Cutting (FGM/C) is a harmful traditional practice with severe health complications, deeply rooted in many Sub-Saharan African countries. In The Gambia, the prevalence of FGM/C is 78.3% in women aged between 15 and 49 years. The objective of this study is to perform a first evaluation of the magnitude of the health consequences of FGM/C in The Gambia.\n\n\nMETHODS\nData were collected on types of FGM/C and health consequences of each type of FGM/C from 871 female patients who consulted for any problem requiring a medical gynaecologic examination and who had undergone FGM/C in The Gambia.\n\n\nRESULTS\nThe prevalence of patients with different types of FGM/C were: type I, 66.2%; type II, 26.3%; and type III, 7.5%. Complications due to FGM/C were found in 299 of the 871 patients (34.3%). Even type I, the form of FGM/C of least anatomical extent, presented complications in 1 of 5 girls and women examined.\n\n\nCONCLUSION\nThis study shows that FGM/C is still practiced in all the six regions of The Gambia, the most common form being type I, followed by type II. All forms of FGM/C, including type I, produce significantly high percentages of complications, especially infections.",
"title": ""
}
] | scidocsrr |
6e839fb934a42b42548e7bdbc8f53cd0 | CloudMoV: Cloud-Based Mobile Social TV | [
{
"docid": "8869cab615e5182c7c03f074ead081f7",
"text": "This article introduces the principal concepts of multimedia cloud computing and presents a novel framework. We address multimedia cloud computing from multimedia-aware cloud (media cloud) and cloud-aware multimedia (cloud media) perspectives. First, we present a multimedia-aware cloud, which addresses how a cloud can perform distributed multimedia processing and storage and provide quality of service (QoS) provisioning for multimedia services. To achieve a high QoS for multimedia services, we propose a media-edge cloud (MEC) architecture, in which storage, central processing unit (CPU), and graphics processing unit (GPU) clusters are presented at the edge to provide distributed parallel processing and QoS adaptation for various types of devices.",
"title": ""
}
] | [
{
"docid": "cf0b49aabe042b93be0c382ad69e4093",
"text": "This paper shows a technique to enhance the resolution of a frequency modulated continuous wave (FMCW) radar system. The range resolution of an FMCW radar system is limited by the bandwidth of the transmitted signal. By using high resolution methods such as the Matrix Pencil Method (MPM) it is possible to enhance the resolution. In this paper a new method to obtain a better resolution for FMCW radar systems is used. This new method is based on the MPM and is enhanced to require less computing power. To evaluate this new technique, simulations and measurements are used. The result shows that this new method is able to improve the performance of FMCW radar systems.",
"title": ""
},
{
"docid": "48c28572e5eafda1598a422fa1256569",
"text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.",
"title": ""
},
{
"docid": "6daa93f2a7cfaaa047ecdc04fb802479",
"text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.",
"title": ""
},
{
"docid": "faf9f552aa52fcf615447e73c54bda5e",
"text": "Physicists use quantum models to describe the behavior of physical systems. Quantum models owe their success to their interpretability, to their relation to probabilistic models (quantization of classical models) and to their high predictive power. Beyond physics, these properties are valuable in general data science. This motivates the use of quantum models to analyze general nonphysical datasets. Here we provide both empirical and theoretical insights into the application of quantum models in data science. In the theoretical part of this paper, we firstly show that quantum models can be exponentially more efficient than probabilistic models because there exist datasets that admit low-dimensional quantum models and only exponentially high-dimensional probabilistic models. Secondly, we explain in what sense quantum models realize a useful relaxation of compressed probabilistic models. Thirdly, we show that sparse datasets admit low-dimensional quantum models and finally, we introduce a method to compute hierarchical orderings of properties of users (e.g., personality traits) and items (e.g., genres of movies). In the empirical part of the paper, we evaluate quantum models in item recommendation and observe that the predictive power of quantum-inspired recommender systems can compete with state-of-the-art recommender systems like SVD++ and PureSVD. Furthermore, we make use of the interpretability of quantum models by computing hierarchical orderings of properties of users and items. This work establishes a connection between data science (item recommendation), information theory (communication complexity), mathematical programming (positive semidefinite factorizations) and physics (quantum models).",
"title": ""
},
{
"docid": "2dee6efe4d5e27601e96e5229ca4d622",
"text": "This report deals with translation invariance of convolutional neural networks (CNNs) for automatic target recognition (ATR) from synthetic aperture radar (SAR) imagery. In particular, the translation invariance of CNNs for SAR ATR represents the robustness against misalignment of target chips extracted from SAR images. To understand the translation invariance of the CNNs, we trained CNNs which classify the target chips from the MSTAR into the ten classes under the condition of with and without data augmentation, and then visualized the translation invariance of the CNNs. According to our results, even if we use a deep residual network, the translation invariance of the CNN without data augmentation using the aligned images such as the MSTAR target chips is not so large. A more important factor of translation invariance is the use of augmented training data. Furthermore, our CNN using augmented training data achieved a state-of-the-art classification accuracy of 99.6%. These results show an importance of domain-specific data augmentation.",
"title": ""
},
{
"docid": "d119e4dea72ac5ff772ab997c1d70955",
"text": "This paper describes the design of an adaptive intelligent augmented reality serious game which aims to foster problem solving skills in young learners. Studies show that our students lack computational thinking skills in high school, which raises the need to establish new methods to develop these skills in our younger learners. We believe that problem solving skills are the fundamental skills of computational thinking and are critical for STEM, in addition to a broad range of other fields. Therefore we decided to focus on those meta-cognitive skills acquired to foster problem solving, such as strategic knowledge. The game described in this paper provides a unique adaptive learning environment that aims to develop learners’ meta-cognitive skills by utilizing augmented reality technology, believable pedagogical agents and intelligent tutoring modules. It offers a great user experience and entertainment which we hope will encourage learners to invest more time in the learning process. This paper describes the architecture and design of the game from the viewpoint of educational pedagogies and frameworks for serious game design.",
"title": ""
},
{
"docid": "b92851e1c50db1af8ec26734f472d989",
"text": "A new reflection-type phase shifter with a full 360deg relative phase shift range and constant insertion loss is presented. This feature is obtained by incorporating a new cascaded connection of varactors into the impedance-transforming quadrature coupler. The required reactance variation of a varactor can be reduced by controlling the impedance ratio of the quadrature coupler. The implemented phase shifter achieves a measured maximal relative phase shift of 407deg, an averaged insertion loss of 4.4 dB and return losses better than 19 dB at 2 GHz. The insertion-loss variation is within plusmn0.1 and plusmn0.2 dB over the 360deg and 407deg relative phase shift tuning range, respectively.",
"title": ""
},
{
"docid": "c1ae8ea2da982e5094fdd9816e249b53",
"text": "Corporate Social Responsibility (CSR) reporting receives much attention nowadays. Communication with stakeholders is a part of assumed social responsibility, thus the quality of information disclosed in CSR reports has a significant impact on fulfilment of the responsibility. The authors use content analysis of selected CSR reports to describe and assess patterns and structure of information disclosed in them. CSR reports of Polish companies have similar structures at a very high level of analysis, but a more detailed study reveals much diversity in approaches to the report’s content. Even fairly similar companies may devote significantly different amounts of space to the same issue. The number of similar stakeholders varies irrespectively of the company’s size. Considerable diversity of reporting patterns results from the nature of CSR reporting, because it concerns highly entity-specific issues. Thus, such considerable diversity is not surprising. However, many initiatives and efforts are devoted to greater comparability of reporting, so a greater degree of uniformity can be expected. Similar conclusions may be drawn from integrated reports’ analysis, though a small sample reflects the relative novelty of this trend.",
"title": ""
},
{
"docid": "63e8cf0d01b07bedb2cc0d182dff5e3e",
"text": "Machine Reading and Comprehension recently has drawn a fair amount of attention in the field of natural language processing. In this paper, we consider integrating side information to improve machine comprehension on answering cloze-style questions more precisely. To leverage the external information, we present a novel attention-based architecture which could feed the side information representations into word level embeddings to explore the comprehension performance. Our experiments show consistent improvements of our model over various baselines.",
"title": ""
},
{
"docid": "080ba812015389cd8c1e5546b23acded",
"text": "In recent years, time-resolved multivariate pattern analysis (MVPA) has gained much popularity in the analysis of electroencephalography (EEG) and magnetoencephalography (MEG) data. However, MVPA may appear daunting to those who have been applying traditional analyses using event-related potentials (ERPs) or event-related fields (ERFs). To ease this transition, we recently developed the Amsterdam Decoding and Modeling (ADAM) toolbox in MATLAB. ADAM is an entry-level toolbox that allows a direct comparison of ERP/ERF results to MVPA results using any dataset in standard EEGLAB or Fieldtrip format. The toolbox performs and visualizes multiple-comparison corrected group decoding and forward encoding results in a variety of ways, such as classifier performance across time, temporal generalization (time-by-time) matrices of classifier performance, channel tuning functions (CTFs) and topographical maps of (forward-transformed) classifier weights. All analyses can be performed directly on raw data or can be preceded by a time-frequency decomposition of the data in which case the analyses are performed separately on different frequency bands. The figures ADAM produces are publication-ready. In the current manuscript, we provide a cookbook in which we apply a decoding analysis to a publicly available MEG/EEG dataset involving the perception of famous, non-famous and scrambled faces. The manuscript covers the steps involved in single subject analysis and shows how to perform and visualize a subsequent group-level statistical analysis. The processing pipeline covers computation and visualization of group ERPs, ERP difference waves, as well as MVPA decoding results. It ends with a comparison of the differences and similarities between EEG and MEG decoding results. The manuscript has a level of description that allows application of these analyses to any dataset in EEGLAB or Fieldtrip format.",
"title": ""
},
{
"docid": "f3c5a1cef29f5fa834433ce859b15694",
"text": "This paper describes the design, construction, and testing of a 750-V 100-kW 20-kHz bidirectional isolated dual-active-bridge dc-dc converter using four 1.2-kV 400-A SiC-MOSFET/SBD dual modules. The maximum conversion efficiency from the dc-input to the dc-output terminals is accurately measured to be as high as 98.7% at 42-kW operation. The overall power loss at the rated-power (100 kW) operation, excluding the gate-drive and control circuit losses, is divided into the conduction and switching losses produced by the SiC modules, the iron and copper losses due to magnetic devices, and the other unknown loss. The power-loss breakdown concludes that the sum of the conduction and switching losses is about 60% of the overall power loss and that the conduction loss is nearly equal to the switching loss at the 100-kW and 20-kHz operation.",
"title": ""
},
{
"docid": "5f9da666504ade5b661becfd0a648978",
"text": "cefe.cnrs-mop.fr Under natural selection, individuals tend to adapt to their local environmental conditions, resulting in a pattern of LOCAL ADAPTATION (see Glossary). Local adaptation can occur if the direction of selection changes for an allele among habitats (antagonistic environmental effect), but it might also occur if the intensity of selection at several loci that are maintained as polymorphic by recurrent mutations covaries negatively among habitats. These two possibilities have been clearly identified in the related context of the evolution of senescence but have not have been fully appreciated in empirical and theoretical studies of local adaptation [1,2].",
"title": ""
},
{
"docid": "8cea62bdb8b4ce82a8b2d931ef20b0f2",
"text": "This paper addresses the Volume dimension of Big Data. It presents a preliminary work on finding segments of retailers from a large amount of Electronic Funds Transfer at Point Of Sale (EFTPOS) transaction data. To the best of our knowledge, this is the first time a work on Big EFTPOS Data problem has been reported. A data reduction technique using the RFM (Recency, Frequency, Monetary) analysis as applied to a large data set is presented. Ways to optimise clustering techniques used to segment the big data set through data partitioning and parallelization are explained. Preliminary analysis on the segments of the retailers output from the clustering experiments demonstrates that further drilling down into the retailer segments to find more insights into their business behaviours is warranted.",
"title": ""
},
{
"docid": "073ec1e3b8c6feab18f2ae53eab5cc24",
"text": "Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBM’s to learn a deeper model.",
"title": ""
},
{
"docid": "72b080856124d39b62d531cb52337ce9",
"text": "Experimental and clinical studies have identified a crucial role of microcirculation impairment in severe infections. We hypothesized that mottling, a sign of microcirculation alterations, was correlated to survival during septic shock. We conducted a prospective observational study in a tertiary teaching hospital. All consecutive patients with septic shock were included during a 7-month period. After initial resuscitation, we recorded hemodynamic parameters and analyzed their predictive value on mortality. The mottling score (from 0 to 5), based on mottling area extension from the knees to the periphery, was very reproducible, with an excellent agreement between independent observers [kappa = 0.87, 95% CI (0.72–0.97)]. Sixty patients were included. The SOFA score was 11.5 (8.5–14.5), SAPS II was 59 (45–71) and the 14-day mortality rate 45% [95% CI (33–58)]. Six hours after inclusion, oliguria [OR 10.8 95% CI (2.9, 52.8), p = 0.001], arterial lactate level [<1.5 OR 1; between 1.5 and 3 OR 3.8 (0.7–29.5); >3 OR 9.6 (2.1–70.6), p = 0.01] and mottling score [score 0–1 OR 1; score 2–3 OR 16, 95% CI (4–81); score 4–5 OR 74, 95% CI (11–1,568), p < 0.0001] were strongly associated with 14-day mortality, whereas the mean arterial pressure, central venous pressure and cardiac index were not. The higher the mottling score was, the earlier death occurred (p < 0.0001). Patients whose mottling score decreased during the resuscitation period had a better prognosis (14-day mortality 77 vs. 12%, p = 0.0005). The mottling score is reproducible and easy to evaluate at the bedside. The mottling score as well as its variation during resuscitation is a strong predictor of 14-day survival in patients with septic shock.",
"title": ""
},
{
"docid": "5c58eb86ec2fb61a4c26446a41a9037a",
"text": "The filter bank methods have been a popular non-parametric way of computing the complex amplitude spectrum. So far, the length of the filters in these filter banks has been set to some constant value independently of the data. In this paper, we take the first step towards considering the filter length as an unknown parameter. Specifically, we derive a very simple and approximate way of determining the optimal filter length in a data-adaptive way. Based on this analysis, we also derive a model averaged version of the forward and the forward-backward amplitude spectral Capon estimators. Through simulations, we show that these estimators significantly improve the estimation accuracy compared to the traditional Capon estimators.",
"title": ""
},
{
"docid": "1a90c5688663bcb368d61ba7e0d5033f",
"text": "Content-based audio classification and segmentation is a basis for further audio/video analysis. In this paper, we present our work on audio segmentation and classification which employs support vector machines (SVMs). Five audio classes are considered in this paper: silence, music, background sound, pure speech, and non- pure speech which includes speech over music and speech over noise. A sound stream is segmented by classifying each sub-segment into one of these five classes. We have evaluated the performance of SVM on different audio type-pairs classification with testing unit of different- length and compared the performance of SVM, K-Nearest Neighbor (KNN), and Gaussian Mixture Model (GMM). We also evaluated the effectiveness of some new proposed features. Experiments on a database composed of about 4- hour audio data show that the proposed classifier is very efficient on audio classification and segmentation. It also shows the accuracy of the SVM-based method is much better than the method based on KNN and GMM.",
"title": ""
},
{
"docid": "161bf0c4abd39223f881510594b459d8",
"text": "This paper describes a set of comparative exper iments for the problem of automatically ltering unwanted electronic mail messages Several vari ants of the AdaBoost algorithm with con dence rated predictions Schapire Singer have been applied which di er in the complexity of the base learners considered Two main conclu sions can be drawn from our experiments a The boosting based methods clearly outperform the baseline learning algorithms Naive Bayes and Induction of Decision Trees on the PU corpus achieving very high levels of the F measure b Increasing the complexity of the base learners al lows to obtain better high precision classi ers which is a very important issue when misclassi cation costs are considered",
"title": ""
},
{
"docid": "460e8daf5dfc9e45c3ade5860aa9cc57",
"text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.",
"title": ""
}
] | scidocsrr |
281a38b008e433a49825c69381ae6e7e | Automatically Predicting Peer-Review Helpfulness | [
{
"docid": "6d2abcdd728a2355259c60c870b411a4",
"text": "Although providing feedback is commonly practiced in education, there is no general agreement regarding what type of feedback is most helpful and why it is helpful. This study examined the relationship between various types of feedback, potential internal mediators, and the likelihood of implementing feedback. Five main predictions were developed from the feedback literature in writing, specifically regarding feedback features (summarization, identifying problems, providing solutions, localization, explanations, scope, praise, and mitigating language) as they relate to potential causal mediators of problem or solution understanding and problem or solution agreement, leading to the final outcome of feedback implementation. To empirically test the proposed feedback model, 1,073 feedback segments from writing assessed by peers was analyzed. Feedback was collected using SWoRD, an online peer review system. Each segment was coded for each of the feedback features, implementation, agreement, and understanding. The correlations between the feedback features, levels of mediating variables, and implementation rates revealed several significant relationships. Understanding was the only significant mediator of implementation. Several feedback features were associated with understanding: including solutions, a summary of the performance, and the location of the problem were associated with increased understanding; and explanations of problems were associated with decreased understanding. Implications of these results are discussed.",
"title": ""
},
{
"docid": "5f366ed9a90448be28c1ec9249b4ec96",
"text": "With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews that are typically published for a single product makes harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we reexamine the impact of reviews on economic outcomes like product sales and see how different factors affect social outcomes such as their perceived usefulness. Our approach explores multiple aspects of review text, such as subjectivity levels, various measures of readability and extent of spelling errors to identify important text-based features. In addition, we also examine multiple reviewer-level features such as average usefulness of past reviews and the self-disclosed identity measures of reviewers that are displayed next to a review. Our econometric analysis reveals that the extent of subjectivity, informativeness, readability, and linguistic correctness in reviews matters in influencing sales and perceived usefulness. Reviews that have a mixture of objective, and highly subjective sentences are negatively associated with product sales, compared to reviews that tend to include only subjective or only objective information. However, such reviews are rated more informative (or helpful) by other users. By using Random Forest-based classifiers, we show that we can accurately predict the impact of reviews on sales and their perceived usefulness. We examine the relative importance of the three broad feature categories: “reviewer-related” features, “review subjectivity” features, and “review readability” features, and find that using any of the three feature sets results in a statistically equivalent performance as in the case of using all available features. This paper is the first study that integrates econometric, text mining, and predictive modeling techniques toward a more complete analysis of the information captured by user-generated online reviews in order to estimate their helpfulness and economic impact.",
"title": ""
}
] | [
{
"docid": "711b8ac941db1e6e1eef093ca340717b",
"text": "Deep neural networks (DNNs) have a wide range of applications, and software employing them must be thoroughly tested, especially in safety critical domains. However, traditional software testing methodology, including test coverage criteria and test case generation algorithms, cannot be applied directly to DNNs. This paper bridges this gap. First, inspired by the traditional MC/DC coverage criterion, we propose a set of four test criteria that are tailored to the distinct features of DNNs. Our novel criteria are incomparable and complement each other. Second, for each criterion, we give an algorithm for generating test cases based on linear programming (LP). The algorithms produce a new test case (i.e., an input to the DNN) by perturbing a given one. They encode the test requirement and a fragment of the DNN by fixing the activation pattern obtained from the given input example, and then minimize the difference between the new and the current inputs. Finally, we validate our method on a set of networks trained on the MNIST dataset. The utility of our method is shown experimentally with four objectives: (1) bug finding; (2) DNN safety statistics; (3) testing efficiency and (4) DNN internal structure analysis.",
"title": ""
},
{
"docid": "6dc4e4949d4f37f884a23ac397624922",
"text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.",
"title": ""
},
{
"docid": "6bdcac1d424162a89adac7fa2a6221ae",
"text": "The growing popularity of online product review forums invites people to express opinions and sentiments toward the products .It gives the knowledge about the product as well as sentiment of people towards the product. These online reviews are very important for forecasting the sales performance of product. In this paper, we discuss the online review mining techniques in movie domain. Sentiment PLSA which is responsible for finding hidden sentiment factors in the reviews and ARSA model used to predict sales performance. An Autoregressive Sentiment and Quality Aware model (ARSQA) also in consideration for to build the quality for predicting sales performance. We propose clustering and classification based algorithm for sentiment analysis.",
"title": ""
},
{
"docid": "ee9d84f08326cf48116337595dbe07f7",
"text": "Facial fractures were described as early as the seventeenth century BC in the Edwin Smith surgical papyrus. In the eighteenth century, the French surgeon Desault described the unique propensity of the mandible to fracture in the narrow subcondylar region, which is commonly observed to this day. In a recent 5-year review of the National Trauma Data Base with more than 13,000 mandible fractures, condylar and subcondylar fractures made up 14.8% and 12.6% of all fractures respectively; taken together, more than any other site alone. This study, along with others, have confirmed that most modern-age condylar fractures occur in men, and are most often caused by motor vehicle accidents, and assaults. Historically, condylar fractures were managed in a closed fashion with various forms of immobilization or maxillomandibular fixation, with largely favorable results. Although the goals of treatment are the restoration of form and function, closed treatment relies on patient adaptation to an altered anatomy, because anatomic repositioning of the proximal segment is not achieved. However, the human body has a remarkable ability to adapt, and it remains an appropriate treatment of a large number of condylar fractures, including intracapsular fractures, fractures with minimal or no displacement, almost all pediatric condylar fractures, and fractures in patients whose medical or social situations preclude other forms of treatment. With advances in the understanding of osteosynthesis and an appreciation of surgical anatomy, open",
"title": ""
},
{
"docid": "7b6e811ea3f227c33755049355949eaf",
"text": "We revisit the task of learning a Euclidean metric from data. We approach this problem from first principles and formulate it as a surprisingly simple optimization problem. Indeed, our formulation even admits a closed form solution. This solution possesses several very attractive propertie s: (i) an innate geometric appeal through the Riemannian geometry of positive definite matrices; (ii) ease of interpretability; and (iii) computational speed several orders of magnitude faster tha n the widely used LMNN and ITML methods. Furthermore, on standard benchmark datasets, our closed-form solution consist ently attains higher classification accuracy.",
"title": ""
},
{
"docid": "1ecf6f45f0dabd484bc736a5b54fda91",
"text": "BACKGROUND\nDaily suppressive therapy with valacyclovir reduces risk of sexual transmission of herpes simplex virus type 2 (HSV-2) in HSV-2-serodiscordant heterosexual couples by 48%. Whether suppressive therapy reduces HSV-2 transmission from persons coinfected with HSV-2 and human immunodeficiency virus type 1 (HIV-1) is unknown.\n\n\nMETHODS\nWithin a randomized trial of daily acyclovir 400 mg twice daily in African HIV-1 serodiscordant couples, in which the HIV-1-infected partner was HSV-2 seropositive, we identified partnerships in which HIV-1-susceptible partners were HSV-2 seronegative to estimate the effect of acyclovir on risk of HSV-2 transmission.\n\n\nRESULTS\nWe randomly assigned 911 HSV-2/HIV-1-serodiscordant couples to daily receipt of acyclovir or placebo. We observed 68 HSV-2 seroconversions, 40 and 28 in acyclovir and placebo groups, respectively (HSV-2 incidence, 5.1 cases per 100 person-years; hazard ratio [HR], 1.35 [95% confidence interval, .83-2.20]; P = .22). Among HSV-2-susceptible women, vaginal drying practices (adjusted HR, 44.35; P = .004) and unprotected sex (adjusted HR, 9.91; P = .002) were significant risk factors for HSV-2 acquisition; having more children was protective (adjusted HR, 0.47 per additional child; P = .012). Among HSV-2-susceptible men, only age ≤30 years was associated with increased risk of HSV-2 acquisition (P = .016).\n\n\nCONCLUSIONS\nTreatment of African HSV-2/HIV-1-infected persons with daily suppressive acyclovir did not decrease risk of HSV-2 transmission to susceptible partners. More-effective prevention strategies to reduce HSV-2 transmission from HIV-1-infected persons are needed.",
"title": ""
},
{
"docid": "06129167c187b96e3c064e05c2b475f8",
"text": "Elderly patients with acute myeloid leukemia (AML) who are refractory to or relapse following frontline treatment constitute a poor-risk group with a poor long-term outcome. Host-related factors and unfavorable disease-related features contribute to early treatment failures following frontline therapy, thus making attainment of remission and long-term survival with salvage therapy particularly challenging for elderly patients. Currently, no optimal salvage strategy exists for responding patients, and allogeneic hematopoietic stem cell transplant is the only curative option in this setting; however, the vast majority of elderly patients are not candidates for this procedure due to poor functional status secondary to age and age-related comorbidities. Furthermore, the lack of effective salvage programs available for elderly patients with recurrent AML underscores the need for therapies that consistently yield durable remissions or durable control of their disease. The purpose of this review was to highlight the currently available strategies, as well as future strategies under development, for treating older patients with recurrent AML.",
"title": ""
},
{
"docid": "81c90998c5e456be34617e702dbfa4f5",
"text": "In this paper, a new unsupervised learning algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), is proposed. To exploit the discriminative information in unsupervised scenarios, we perform spectral clustering to learn the cluster labels of the input samples, during which the feature selection is performed simultaneously. The joint learning of the cluster labels and feature selection matrix enables NDFS to select the most discriminative features. To learn more accurate cluster labels, a nonnegative constraint is explicitly imposed to the class indicators. To reduce the redundant or even noisy features, `2,1-norm minimization constraint is added into the objective function, which guarantees the feature selection matrix sparse in rows. Our algorithm exploits the discriminative information and feature correlation simultaneously to select a better feature subset. A simple yet efficient iterative algorithm is designed to optimize the proposed objective function. Experimental results on different real world datasets demonstrate the encouraging performance of our algorithm over the state-of-the-arts. Introduction The dimension of data is often very high in many domains (Jain and Zongker 1997; Guyon and Elisseeff 2003), such as image and video understanding (Wang et al. 2009a; 2009b), and bio-informatics. In practice, not all the features are important and discriminative, since most of them are often correlated or redundant to each other, and sometimes noisy (Duda, Hart, and Stork 2001; Liu, Wu, and Zhang 2011). These features may result in adverse effects in some learning tasks, such as over-fitting, low efficiency and poor performance (Liu, Wu, and Zhang 2011). Consequently, it is necessary to reduce dimensionality, which can be achieved by feature selection or transformation to a low dimensional space. In this paper, we focus on feature selection, which is to choose discriminative features by eliminating the ones with little or no predictive information based on certain criteria. Many feature selection algorithms have been proposed, which can be classified into three main families: filter, wrapper, and embedded methods. The filter methods (Duda, Hart, Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and Stork 2001; He, Cai, and Niyogi 2005; Zhao and Liu 2007; Masaeli, Fung, and Dy 2010; Liu, Wu, and Zhang 2011; Yang et al. 2011a) use statistical properties of the features to filter out poorly informative ones. They are usually performed before applying classification algorithms. They select a subset of features only based on the intrinsic properties of the data. In the wrapper approaches (Guyon and Elisseeff 2003; Rakotomamonjy 2003), feature selection is “wrapped” in a learning algorithm and the classification performance of features is taken as the evaluation criterion. Embedded methods (Vapnik 1998; Zhu et al. 2003) perform feature selection in the process of model construction. In contrast with filter methods, wrapper and embedded methods are tightly coupled with in-built classifiers, which causes that they are less generality and computationally expensive. In this paper, we focus on the filter feature selection algorithm. Because of the importance of discriminative information in data analysis, it is beneficial to exploit discriminative information for feature selection, which is usually encoded in labels. 
However, how to select discriminative features in unsupervised scenarios is a significant but hard task due to the lack of labels. In light of this, we propose a novel unsupervised feature selection algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), in this paper. We perform spectral clustering and feature selection simultaneously to select the discriminative features for unsupervised learning. The cluster label indicators are obtained by spectral clustering to guide the feature selection procedure. Different from most of the previous spectral clustering algorithms (Shi and Malik 2000; Yu and Shi 2003), we explicitly impose a nonnegative constraint into the objective function, which is natural and reasonable as discussed later in this paper. With nonnegative and orthogonality constraints, the learned cluster indicators are much closer to the ideal results and can be readily utilized to obtain cluster labels. Our method exploits the discriminative information and feature correlation in a joint framework. For the sake of feature selection, the feature selection matrix is constrained to be sparse in rows, which is formulated as `2,1-norm minimization term. To solve the proposed problem, a simple yet effective iterative algorithm is proposed. Extensive experiments are conducted on different datasets, which show that the proposed approach outperforms the state-of-the-arts in different applications. Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence",
"title": ""
},
{
"docid": "0105070bd23400083850627b1603af0b",
"text": "This research covers an endeavor by the author on the usage of automated vision and navigation framework; the research is conducted by utilizing a Kinect sensor requiring minimal effort framework for exploration purposes in the zone of robot route. For this framework, GMapping (a highly efficient Rao-Blackwellized particle filer to learn grid maps from laser range data) parameters have been optimized to improve the accuracy of the map generation and the laser scan. With the use of Robot Operating System (ROS), the open source GMapping bundle was utilized as a premise for a map era and Simultaneous Localization and Mapping (SLAM). Out of the many different map generation techniques, the tele-operation used is interactive marker, which controls the TurtleBot 2 movements via RVIZ (3D visualization tool for ROS). Test results completed with the multipurpose robot in a counterfeit and regular environment represents the preferences of the proposed strategy. From experiments, it is found that Kinect sensor produces a more accurate map compared to non-filtered laser range finder data, which is excellent since the price of a Kinect sensor is much cheaper than a laser range finder. An expansion of experimental results was likewise done to test the performance of the portable robot frontier exploring in an obscure environment while performing SLAM alongside the proposed technique.",
"title": ""
},
{
"docid": "dc198f396142376e36d7143a5bfe7d19",
"text": "Successful direct pulp capping of cariously exposed permanent teeth with reversible pulpitis and incomplete apex formation can prevent the need for root canal treatment. A case report is presented which demonstrates the use of mineral trioxide aggregate (MTA) as a direct pulp capping material for the purpose of continued maturogenesis of the root. Clinical and radiographic follow-up demonstrated a vital pulp and physiologic root development in comparison with the contralateral tooth. MTA can be considered as an effective material for vital pulp therapy, with the goal of maturogenesis.",
"title": ""
},
{
"docid": "4cfedb5e516692b12a610c4211e6fdd4",
"text": "Supporters of market-based education reforms argue that school autonomy and between-school competition can raise student achievement. Yet U.S. reforms based in part on these ideas charter schools, school-based management, vouchers and school choice are limited in scope, complicating evaluations of their impact. In contrast, a series of remarkable reforms enacted by the Thatcher Government in Britain in the 1980s provide an ideal testing ground for examining the effects of school autonomy and between-school competition. In this paper I study one reform described by Chubb and Moe (1992) as ‘truly revolutionary’ that allowed public high schools to ‘opt out’ of the local school authority and become quasi-independent, funded directly by central Government. In order to opt out schools had to first win a majority vote of current parents, and I assess the impact of school autonomy via a regression discontinuity design, comparing student achievement levels at schools where the vote barely won to those where it barely lost. To assess the effects of competition I use this same idea to compare student achievement levels at neighbouring schools of barely winners to neighbouring schools of barely losers. My results suggest two conclusions. First, there were large gains to schools that won the vote and opted out, on the order of a onequarter standard deviation improvement on standardised national examinations. Since results improved for those students already enrolled in the school at the time of the vote, this outcome is not likely to be driven by changes in student-body composition (cream-skimming). Second, the gains enjoyed by the opted-out schools appear not to have spilled over to their neighbours I can never reject the hypothesis of no spillovers and can always reject effects bigger than one half of the ‘own-school’ impact. I interpret my results as supportive of education reforms that seek to hand power to schools, with the caveat that I do not know precisely what opted-out schools did to improve. With regards to competition, although I cannot rule out small but economically important competition effects, my results suggest caution as to the likely benefits.",
"title": ""
},
{
"docid": "86c19291942c1eeeb38abd1531801731",
"text": "There exist a lot of challenges in trajectory planning for autonomous driving: 1) Needs of both spatial and temporal planning for highly dynamic environments; 2) Nonlinear vehicle models and non-convex collision avoidance constraints. 3) High computational efficiency for real-time implementation. Iterative Linear Quadratic Regulator (ILQR) is an algorithm which solves predictive optimal control problem with nonlinear system very efficiently. However, it can not deal with constraints. In this paper, the Constrained Iterative LQR (CILQR) is proposed to handle the constraints in ILQR. Then an on road driving problem is formulated. Simulation case studies show the capability of the CILQR algorithm to solve the on road driving motion planning problem.",
"title": ""
},
{
"docid": "bc1f6f7a18372ce618c82f94a3091fd9",
"text": "THE INTERNATIONAL JOURNAL OF ESTHETIC DENTISTRY The management of individual cases presents each clinician with a variety of attractive options and sophisticated evidence-based solutions. Financial constraints can often restrict these options and limit the choice pathways that can be offered. The case presented here demonstrates the management of severe erosion on the maxillary anterior teeth via a minimally invasive, practical, and economic route. When tooth surface loss occurs,1 it can be clinically challenging to isolate a single etiological factor since it is usually multifactorial in origin. The patient presented with the classic signs of erosion (Fig 1a). A major causative factor of this erosion was a large consumption of carbonated beverages on a daily basis over a number of years. Chronic exposure of dental hard tissues to acidic substrates led to extensive enamel and dentin loss from both intrinsic and extrinsic sources (Fig 1b and c). The ACE classification guides the clinician on the management options of treatment modalities, which are dependent on the severity of the erosion.2 A clinical case involving severe erosion of the maxillary anterior teeth restored with direct composite resin restorations",
"title": ""
},
{
"docid": "e1485bddbab0c3fa952d045697ff2112",
"text": "The diversity of an ensemble of classifiers is known to be an important factor in determining its generalization error. We present a new method for generating ensembles, Decorate (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples), that directly constructs diverse hypotheses using additional artificially-constructed training examples. The technique is a simple, general meta-learner that can use any strong learner as a base classifier to build diverse committees. Experimental results using decision-tree induction as a base learner demonstrate that this approach consistently achieves higher predictive accuracy than the base classifier, Bagging and Random Forests. Decorate also obtains higher accuracy than Boosting on small training sets, and achieves comparable performance on larger training sets.",
"title": ""
},
{
"docid": "34546e42bd78161259d2bc190e36c9f7",
"text": "Peer to Peer networks are the leading cause for music piracy but also used for music sampling prior to purchase. In this paper we investigate the relations between music file sharing and sales (both physical and digital)using large Peer-to-Peer query database information. We compare file sharing information on songs to their popularity on the Billboard Hot 100 and the Billboard Digital Songs charts, and show that popularity trends of songs on the Billboard have very strong correlation (0.88-0.89) to their popularity on a Peer-to-Peer network. We then show how this correlation can be utilized by common data mining algorithms to predict a song's success in the Billboard in advance, using Peer-to-Peer information.",
"title": ""
},
{
"docid": "9bb0ee77990ead987b49ab4180edd99f",
"text": "Stacked graphs are a visualization technique popular in casual scenarios for representing multiple time-series. Variations of stacked graphs have been focused on reducing the distortion of individual streams because foundational perceptual studies suggest that variably curved slopes may make it difficult to accurately read and compare values. We contribute to this discussion by formally comparing the relative readability of basic stacked area charts, ThemeRivers, streamgraphs and our own interactive technique for straightening baselines of individual streams in a ThemeRiver. We used both real-world and randomly generated datasets and covered tasks at the elementary, intermediate and overall information levels. Results indicate that the decreased distortion of the newer techniques does appear to improve their readability, with streamgraphs performing best for value comparison tasks. We also found that when a variety of tasks is expected to be performed, using the interactive version of the themeriver leads to more correctness at the cost of being slower for value comparison tasks.",
"title": ""
},
{
"docid": "b9e785238c4fb438bada46f196915cdc",
"text": "* Faculty of Information Technology, Rangsit University. Abstract With the rapidly increasing number of Thai text documents available in digital media and websites, it is important to find an efficient text indexing technique to facilitate search and retrieval. An efficient index would speed up the response time and improve the accessibility of the documents. Up to now, not much research in Thai text indexing has been conducted as compared to more commonly used languages like English or other European languages. In Thai text indexing, the extraction of indexing terms becomes a main issue because they cannot be specified automatically from text documents, due to the nature of Thai texts being non-segmented. As a result, there are many challenges for indexing Thai text documents. The ma-jority of Thai text indexing techniques can be divided into two main categories: a language-dependent technique and a lan-guage-independent technique as will be described in this paper.",
"title": ""
},
{
"docid": "68810ad35e71ea7d080e7433e227e40e",
"text": "Mobile devices, ubiquitous in modern lifestyle, embody and provide convenient access to our digital lives. Being small and mobile, they are easily lost or stole, therefore require strong authentication to mitigate the risk of unauthorized access. Common knowledge-based mechanism like PIN or pattern, however, fail to scale with the high frequency but short duration of device interactions and ever increasing number of mobile devices carried simultaneously. To overcome these limitations, we present CORMORANT, an extensible framework for risk-aware multi-modal biometric authentication across multiple mobile devices that offers increased security and requires less user interaction.",
"title": ""
},
{
"docid": "f6ec04f704c58514865206f759ac6d67",
"text": "Speech recognition is the key to realize man-machine interface technology. In order to improve the accuracy of speech recognition and implement the module on embedded system, an embedded speaker-independent isolated word speech recognition system based on ARM is designed after analyzing speech recognition theory. The system uses DTW algorithm and improves the algorithm using a parallelogram to extract characteristic parameters and identify the results. To finish the speech recognition independently, the system uses the STM32 series chip combined with the other external circuitry. The results of speech recognition test can achieve 90%, and which meets the real-time requirements of recognition.",
"title": ""
},
{
"docid": "2f0d6b9bee323a75eea3d15a3cabaeb6",
"text": "OBJECTIVE\nThis article reviews the mechanisms and pathophysiology of traumatic brain injury (TBI).\n\n\nMETHODS\nResearch on the pathophysiology of diffuse and focal TBI is reviewed with an emphasis on damage that occurs at the cellular level. The mechanisms of injury are discussed in detail including the factors and time course associated with mild to severe diffuse injury as well as the pathophysiology of focal injuries. Examples of electrophysiologic procedures consistent with recent theory and research evidence are presented.\n\n\nRESULTS\nAcceleration/deceleration (A/D) forces rarely cause shearing of nervous tissue, but instead, initiate a pathophysiologic process with a well defined temporal progression. The injury foci are considered to be diffuse trauma to white matter with damage occurring at the superficial layers of the brain, and extending inward as A/D forces increase. Focal injuries result in primary injuries to neurons and the surrounding cerebrovasculature, with secondary damage occurring due to ischemia and a cytotoxic cascade. A subset of electrophysiologic procedures consistent with current TBI research is briefly reviewed.\n\n\nCONCLUSIONS\nThe pathophysiology of TBI occurs over time, in a pattern consistent with the physics of injury. The development of electrophysiologic procedures designed to detect specific patterns of change related to TBI may be of most use to the neurophysiologist.\n\n\nSIGNIFICANCE\nThis article provides an up-to-date review of the mechanisms and pathophysiology of TBI and attempts to address misconceptions in the existing literature.",
"title": ""
}
] | scidocsrr |
61058cb53983b7feb06f34550353def8 | Information extraction challenges in managing unstructured data | [
{
"docid": "54c6e02234ce1c0f188dcd0d5ee4f04c",
"text": "The World Wide Web is a vast resource for information. At the same time it is extremely distributed. A particular type of data such as restaurant lists may be scattered across thousands of independent information sources in many di erent formats. In this paper, we consider the problem of extracting a relation for such a data type from all of these sources automatically. We present a technique which exploits the duality between sets of patterns and relations to grow the target relation starting from a small sample. To test our technique we use it to extract a relation of (author,title) pairs from the World Wide Web.",
"title": ""
},
{
"docid": "737ef89cc5f264dcb13be578129dca64",
"text": "We present a new approach to extracting keyphrases based on statistical language models. Our approach is to use pointwise KL-divergence between multiple language models for scoring both phraseness and informativeness, which can be unified into a single score to rank extracted phrases.",
"title": ""
}
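Since the passage above gives only the scoring idea, here is a minimal sketch of pointwise KL-divergence scoring for a candidate bigram; the toy probabilities and the simple sum used to unify phraseness and informativeness are assumptions for illustration, not the authors' exact estimation details.

```python
import math

def pointwise_kl(p, q):
    """Pointwise KL contribution of one item: p * log(p / q)."""
    return p * math.log(p / q)

def score_bigram(fg_bigram_p, fg_w1_p, fg_w2_p, bg_bigram_p):
    # Phraseness: gain of the foreground bigram model over treating the
    # two words as independent unigrams.
    phraseness = pointwise_kl(fg_bigram_p, fg_w1_p * fg_w2_p)
    # Informativeness: foreground probability versus a background corpus.
    informativeness = pointwise_kl(fg_bigram_p, bg_bigram_p)
    return phraseness + informativeness  # unified score used for ranking

# Hypothetical probabilities for a phrase such as "language model".
print(score_bigram(fg_bigram_p=2e-4, fg_w1_p=1e-3, fg_w2_p=2e-3, bg_bigram_p=1e-6))
```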
] | [
{
"docid": "1a393c0789f4dddab690ec65d145424d",
"text": "INTRODUCTION: Microneedling procedures are growing in popularity for a wide variety of skin conditions. This paper comprehensively reviews the medical literature regarding skin needling efficacy and safety in all skin types and in multiple dermatologic conditions. METHODS: A PubMed literature search was conducted in all languages without restriction and bibliographies of relevant articles reviewed. Search terms included: \"microneedling,\" \"percutaneous collagen induction,\" \"needling,\" \"skin needling,\" and \"dermaroller.\" RESULTS: Microneedling is most commonly used for acne scars and cosmetic rejuvenation, however, treatment benefit has also been seen in varicella scars, burn scars, keloids, acne, alopecia, and periorbital melanosis, and has improved flap and graft survival, and enhanced transdermal delivery of topical products. Side effects were mild and self-limited, with few reports of post-inflammatory hyperpigmentation, and isolated reports of tram tracking, facial allergic granuloma, and systemic hypersensitivity. DISCUSS: Microneedling represents a safe, cost-effective, and efficacious treatment option for a variety of dermatologic conditions in all skin types. More double-blinded, randomized, controlled trials are required to make more definitive conclusions. J Drugs Dermatol. 2017;16(4):308-314..",
"title": ""
},
{
"docid": "fb7f079d104e81db41b01afe67cdf3b0",
"text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.",
"title": ""
},
{
"docid": "c5958b1ef21663b89e3823e9c33dc316",
"text": "The so-called “phishing” attacks are one of the important threats to individuals and corporations in today’s Internet. Combatting phishing is thus a top-priority, and has been the focus of much work, both on the academic and on the industry sides. In this paper, we look at this problem from a new angle. We have monitored a total of 19,066 phishing attacks over a period of ten months and found that over 90% of these attacks were actually replicas or variations of other attacks in the database. This provides several opportunities and insights for the fight against phishing: first, quickly and efficiently detecting replicas is a very effective prevention tool. We detail one such tool in this paper. Second, the widely held belief that phishing attacks are dealt with promptly is but an illusion. We have recorded numerous attacks that stay active throughout our observation period. This shows that the current prevention techniques are ineffective and need to be overhauled. We provide some suggestions in this direction. Third, our observation give a new perspective into the modus operandi of attackers. In particular, some of our observations suggest that a small group of attackers could be behind a large part of the current attacks. Taking down that group could potentially have a large impact on the phishing attacks observed today.",
"title": ""
},
{
"docid": "de983364ef0ef446b18da0054765c84c",
"text": "Plastic debris is accumulating on the beaches of Kauai at an alarming rate, averaging 484 pieces/day in one locality. Particles sampled were analyzed to determine the effects of mechanical and chemical processes on the breakdown of polymers in a subtropical setting. Scanning electron microscopy (SEM) indicates that plastic surfaces contain fractures, horizontal notches, flakes, pits, grooves, and vermiculate textures. The mechanically produced textures provide ideal loci for chemical weathering to occur which further weakens the polymer surface leading to embrittlement. Fourier transform infrared spectroscopy (FTIR) results show that some particles have highly oxidized surfaces as indicated by intense peaks in the lower wavenumber region of the spectra. Our textural analyses suggest that polyethylene has the potential to degrade more readily than polypropylene. Further evaluation of plastic degradation in the natural environment may lead to a shift away from the production and use of plastic materials with longer residence times.",
"title": ""
},
{
"docid": "c68dac8613bfd8984045c95a92211bc3",
"text": "This paper analyses alternative techniques for deploying low-cost human resources for data acquisition for classifier induction in domains exhibiting extreme class imbalance - where traditional labeling strategies, such as active learning, can be ineffective. Consider the problem of building classifiers to help brands control the content adjacent to their on-line advertisements. Although frequent enough to worry advertisers, objectionable categories are rare in the distribution of impressions encountered by most on-line advertisers - so rare that traditional sampling techniques do not find enough positive examples to train effective models. An alternative way to deploy human resources for training-data acquisition is to have them \"guide\" the learning by searching explicitly for training examples of each class. We show that under extreme skew, even basic techniques for guided learning completely dominate smart (active) strategies for applying human resources to select cases for labeling. Therefore, it is critical to consider the relative cost of search versus labeling, and we demonstrate the tradeoffs for different relative costs. We show that in cost/skew settings where the choice between search and active labeling is equivocal, a hybrid strategy can combine the benefits.",
"title": ""
},
{
"docid": "47386df9012dfb99aafb7bfd11ac5e66",
"text": "Multilevel modeling is a technique that has numerous potential applications for social and personality psychology. To help realize this potential, this article provides an introduction to multilevel modeling with an emphasis on some of its applications in social and personality psychology. This introduction includes a description of multilevel modeling, a rationale for this technique, and a discussion of applications of multilevel modeling in social and personality psychological research. Some of the subtleties of setting up multilevel analyses and interpreting results are presented, and software options are discussed. Once you know that hierarchies exist, you see them everywhere. (Kreft and de Leeuw, 1998, 1) Whether by design or nature, research in personality and social psychology and related disciplines such as organizational behavior increasingly involves what are often referred to as multilevel data. Sometimes, such data sets are referred to as ‘nested’ or ‘hierarchically nested’ because observations (also referred to as units of analysis) at one level of analysis are nested within observations at another level. For example, in a study of classrooms or work groups, individuals are considered to be nested within groups. Similarly, in diary-style studies, observations (e.g., diary entries) are nested within persons. What is particularly important for present purposes is that when you have multilevel data, you need to analyze them using techniques that take into account this nesting. As discussed below (and in numerous places, including Nezlek, 2001), the results of analyses of multilevel data that do not take into account the multilevel nature of the data may (or perhaps will) be inaccurate. This article is intended to acquaint readers with the basics of multilevel modeling. For researchers, it is intended to provide a basis for further study. I think that a lack of understanding of how to think in terms of hierarchies and a lack of understanding of how to analyze such data inhibits researchers from applying the ‘multilevel perspective’ to their work. For those simply interested in understanding what multilevel",
"title": ""
},
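The passage above argues that nested data need models that respect the nesting. A minimal random-intercept example is sketched below using statsmodels' mixed linear model; the entirely synthetic diary-style dataset (persons as the grouping factor), the variable names, and the effect sizes are assumptions standing in for real data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic diary data: 40 persons, 10 daily observations each.
n_person, n_day = 40, 10
person = np.repeat(np.arange(n_person), n_day)
person_intercept = rng.normal(0, 1.0, n_person)[person]     # level-2 variation
stress = rng.normal(0, 1, n_person * n_day)                  # level-1 predictor
mood = 2.0 - 0.5 * stress + person_intercept + rng.normal(0, 0.8, n_person * n_day)

df = pd.DataFrame({"person": person, "stress": stress, "mood": mood})

# Random-intercept multilevel model: observations nested within persons.
model = smf.mixedlm("mood ~ stress", data=df, groups=df["person"])
result = model.fit()
print(result.summary())
```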
{
"docid": "f29b8c75a784a71dfaac5716017ff4f3",
"text": "The objective of this paper is to design a multi-agent system architecture for the Scrum methodology. Scrum is an iterative, incremental framework for software development which is flexible, adaptable and highly productive. An agent is a system situated within and a part of an environment that senses the environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future (Franklin and Graesser, 1996). To our knowledge, this is first attempt to include software agents in the Scrum framework. Furthermore, our design covers all the stages of software development. Alternative approaches were only restricted to the analysis and design phases. This Multi-Agent System (MAS) Architecture for Scrum acts as a design blueprint and a baseline architecture that can be realised into a physical implementation by using an appropriate agent development framework. The development of an experimental prototype for the proposed MAS Architecture is in progress. It is expected that this tool will provide support to the development team who will no longer be expected to report, update and manage non-core activities daily.",
"title": ""
},
{
"docid": "d2e0c2837d8674f17482013508920f14",
"text": "The effect of dietary creatine and supplementation on skeletal muscle creatine accumulation and subsequent degradation and on urinary creatinine excretion was investigated in 31 male subjects who ingested creatine in different quantities over varying time periods. Muscle total creatine concentration increased by approximately 20% after 6 days of creatine supplementation at a rate of 20 g/day. This elevated concentration was maintained when supplementation was continued at a rate of 2 g/day for a further 30 days. In the absence of 2 g/day supplementation, total creatine concentration gradually declined, such that 30 days after the cessation of supplementation the concentration was no different from the presupplementation value. During this period, urinary creatinine excretion was correspondingly increased. A similar, but more gradual, 20% increase in muscle total creatine concentration was observed over a period of 28 days when supplementation was undertaken at a rate of 3 g/day. In conclusion, a rapid way to \"creatine load\" human skeletal muscle is to ingest 20 g of creatine for 6 days. This elevated tissue concentration can then be maintained by ingestion of 2 g/day thereafter. The ingestion of 3 g creatine/day is in the long term likely to be as effective at raising tissue levels as this higher dose.",
"title": ""
},
{
"docid": "ec105642406ba9111485618e85f5b7cd",
"text": "We present simulations of evacuation processes using a recently introduced cellular automaton model for pedestrian dynamics. This model applies a bionics approach to describe the interaction between the pedestrians using ideas from chemotaxis. Here we study a rather simple situation, namely the evacuation from a large room with one or two doors. It is shown that the variation of the model parameters allows to describe different types of behaviour, from regular to panic. We find a nonmonotonic dependence of the evacuation times on the coupling constants. These times depend on the strength of the herding behaviour, with minimal evacuation times for some intermediate values of the couplings, i.e. a proper combination of herding and use of knowledge about the shortest way to the exit.",
"title": ""
},
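The model in the passage above is a floor-field cellular automaton. The sketch below is a heavily simplified, assumed variant that uses only a static floor field (distance to the exit) with coupling k_S and random-sequential updates; it omits the dynamic "chemotactic" field that produces herding and the parallel-update conflict resolution of the original model.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 15, 15
exit_cell = (7, 0)                              # single door on the left wall

# Static floor field: negative distance to the exit, so larger is better.
yy, xx = np.mgrid[0:H, 0:W]
S = -np.hypot(yy - exit_cell[0], xx - exit_cell[1])

k_S = 2.0                                       # coupling to the static field
occupied = np.zeros((H, W), dtype=bool)
peds = set()
while len(peds) < 30:                           # place 30 pedestrians at random
    p = (int(rng.integers(H)), int(rng.integers(W)))
    if p != exit_cell and p not in peds:
        peds.add(p)
for p in peds:
    occupied[p] = True

def step(peds, occupied):
    """Random-sequential update: each pedestrian hops to one free neighbour."""
    for (y, x) in list(peds):
        if (y, x) == exit_cell:                 # pedestrian leaves the room
            peds.discard((y, x))
            occupied[y, x] = False
            continue
        moves = [(y + dy, x + dx) for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0))
                 if 0 <= y + dy < H and 0 <= x + dx < W and not occupied[y + dy, x + dx]]
        if not moves:
            continue
        w = np.array([np.exp(k_S * S[m]) for m in moves])   # field-biased choice
        new = moves[rng.choice(len(moves), p=w / w.sum())]
        occupied[y, x] = False
        occupied[new] = True
        peds.discard((y, x))
        peds.add(new)

t = 0
while peds and t < 500:
    step(peds, occupied)
    t += 1
print("evacuation time (steps):", t)
```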
{
"docid": "b8def7be21f014693589ae99385412dd",
"text": "Automatic image captioning has received increasing attention in recent years. Although there are many English datasets developed for this problem, there is only one Turkish dataset and it is very small compared to its English counterparts. Creating a new dataset for image captioning is a very costly and time consuming task. This work is a first step towards transferring the available, large English datasets into Turkish. We translated English captioning datasets into Turkish by using an automated translation tool and we trained an image captioning model on the automatically obtained Turkish captions. Our experiments show that this model yields the best performance so far on Turkish captioning.",
"title": ""
},
{
"docid": "3ec0da301eca04c32b321c3141d9a482",
"text": "The appearance of a large number of ontology tools may leave a user looking for an appropriate tool overwhelmed and uncertain on which tool to choose. Thus evaluation and comparison of these tools is important to help users determine which tool is best suited for their tasks. However, there is no “one size fits all” comparison framework for ontology tools: different classes of tools require very different comparison frameworks. For example, ontology-development tools can easily be compared to one another since they all serve the same task: define concepts, instances, and relations in a domain. Tools for ontology merging, mapping, and alignment however are so different from one another that direct comparison may not be possible. They differ in the type of input they require (e.g., instance data or no instance data), the type of output they produce (e.g., one merged ontology, pairs of related terms, articulation rules), modes of interaction and so on. This diversity makes comparing the performance of mapping tools to one another largely meaningless. We present criteria that partition the set of such tools in smaller groups allowing users to choose the set of tools that best fits their tasks. We discuss what resources we as a community need to develop in order to make performance comparisons within each group of merging and mapping tools useful and effective. These resources will most likely come as results of evaluation experiments of stand-alone tools. As an example of such an experiment, we discuss our experiences and results in evaluating PROMPT, an interactive ontology-merging tool. Our experiment produced some of the resources that we can use in more general evaluation. However, it has also shown that comparing the performance of different tools can be difficult since human experts do not agree on how ontologies should be merged, and we do not yet have a good enough metric for comparing ontologies. 1 Ontology-Mapping Tools Versus Ontology-Development Tools Consider two types of ontology tools: (1) tools for developing ontologies and (2) tools for mapping, aligning, or merging ontologies. By ontology-development tools (which we will call development tools in the paper) we mean ontology editors that allow users to define new concepts, relations, and instances. These tools usually have capabilities for importing and extending existing ontologies. Development tools may include graphical browsers, search capabilities, and constraint checking. Protégé-2000 [17], OntoEdit [19], OilEd [2], WebODE [1], and Ontolingua [7] are some examples of development tools. Tools for mapping, aligning, and merging ontologies (which we will call mapping tools) are the tools that help users find similarities and differences between source ontologies. Mapping tools either identify potential correspondences automatically or provide the environment for the users to find and define these correspondences, or both. Mapping tools are often extensions of development tools. Mapping tool and algorithm examples include PROMPT[16], ONION [13], Chimaera [11], FCA-Merge [18], GLUE [5], and OBSERVER [12]. Even though theories on how to evaluate either type of tools are not well articulated at this point, there are already several frameworks for evaluating ontologydevelopment tools. For example, Duineveld and colleagues [6] in their comparison experiment used different development tools to represent the same domain ontology. 
Members of the Ontology-environments SIG in the OntoWeb initiative designed an extensive set of criteria for evaluating ontology-development tools and applied these criteria to compare a number of projects. Some of the aspects that these frameworks compare include: – interoperability with other tools and the ability to import and export ontologies in different representation languages; – expressiveness of the knowledge model; – scalability and extensibility; – availability and capabilities of inference services; – usability of the tools. Let us turn to the second class of ontology tools: tools for mapping, aligning, or merging ontologies. It is tempting to reuse many of the criteria from evaluation of development tools. For example, expressiveness of the underlying language is important and so is scalability and extensibility. We need to know if a mapping tool can work with ontologies from different languages. However, if we look at the mapping tools more closely, we see that their comparison and evaluation must be very different from the comparison and evaluation of development tools. All the ontology-development tools have very similar inputs and the desired outputs: we have a domain, possibly a set of ontologies to reuse, and a set of requirements for the ontology, and we need to use a tool to produce an ontology of that domain satisfying the requirements. Unlike the ontology-development tools, the 1 http://delicias.dia.fi.upm.es/ontoweb/sig-tools/ ontology-mapping tools vary with respect to the precise task that they perform, the inputs on which they operate and the outputs that they produce. First, the tasks for which the mapping tools are designed, differ greatly. On the one hand, all the tools are designed to find similarities and differences between source ontologies in one way or another. In fact, researchers have suggested a uniform framework for describing and analyzing this information regardless of what the final task is [3, 10]. On the other hand, from the user’s point of view the tools differ greatly in what tasks this analysis of similarities and differences supports. For example, Chimaera and PROMPT allow users to merge source ontologies into a new ontology that includes concepts from both sources. The output of ONION is a set of articulation rules between two ontologies; these rules define what the similarities and differences are. The articulation rules can later be used for querying and other tasks. The task of GLUE, AnchorPROMPT [14] and FCA-Merge is to provide a set of pairs of related concepts with some certainty factor associated with each pair. Second, different mapping tools rely on different inputs: Some tools deal only with class hierarchies of the sources and are agnostic in their merging algorithms about slots or instances (e.g., Chimaera). Other tools use not only classes but also slots and value restrictions in their analysis (e.g., PROMPT). Other tools rely in their algorithms on the existence of instances in each of the source ontologies (e.g., GLUE). Yet another set of tools require not only that instances are present, but also that source ontologies share a set of instances (e.g., FCA-Merge). Some tools work independently and produce suggestions to the user at the end, allowing the user to analyze the suggestions (e.g., GLUE, FCAMerge). Some tools expect that the source ontologies follow a specific knowledgerepresentation paradigm (e.g., Description Logic for OBSERVER). 
Other tools rely heavily on interaction with the user and base their analysis not only on the source ontologies themselves but also on the merging or alignment steps that the user performs (e.g., PROMPT, Chimaera). Third, since the tasks that the mapping tools support differ greatly, the interaction between a user and a tool is very different from one tool to another. Some tools provide a graphical interface which allows users to compare the source ontologies visually, and accept or reject the results of the tool analysis (e.g., PROMPT, Chimaera, ONION), the goal of other tools is to run the algorithms which find correlations between the source ontologies and output the results to the user in a text file or on the terminal–the users must then use the results outside the tool itself. The goal of this paper is to start a discussion on a framework for evaluating ontology-mapping tools that would account for this great variety in underlying assumptions and requirements. We argue that many of the tools cannot be compared directly with one another because they are so different in the tasks that they support. We identify the criteria for determining the groups of tools that can be compared directly, define what resources we need to develop to make such comparison possible and discuss our experiences in evaluating our merging tool, PROMPT, as well as the results of this evaluation. 2 Requirements for Evaluating Mapping Tools Before we discuss the evaluation requirements for mapping tools, we must answer the following question which will certainly affect the requirements: what is the goal of such potential evaluation? It is tempting to say “find the best tool.” However, as we have just discussed, given the diversity in the tasks that the tools support, their modes of interaction, the input data they rely on, it is impossible to compare the tools to one another and to find one or even several measures to identify the “best” tool. Therefore, we suggest that the questions driving such evaluation must be user-oriented. A user may ask either what is the best tool for his task or whether a particular tool is good enough for his task. Depending on what the user’s source ontologies are, how much manual work he is willing to put in, how important the precision of the results is, one or another tool will be more appropriate. Therefore, the first set of evaluation criteria are pragmatic criteria. These criteria include but are not limited to the following: Input requirements What elements from the source ontologies does the tool use? Which of these elements does the tool require? This information may include: concept names, class hierarchy, slot definitions, facet values, slot values, instances. Does the tool require that source ontologies use a particular knowledge-representation paradigm? Level of user interaction Does the tool perform the comparison in a “batch mode,” presenting the results at the end, or is it an interactive tool where intermediate results are analyzed by the user, and the tool uses the feedback for further analysis? Type o",
"title": ""
},
{
"docid": "5fe7cf9d742f79263e804f164b48d208",
"text": "In this paper we consider the cognitive radio system based on spectrum sensing, and propose an error correction technique for its performance improvement. We analyze secondary user link based on Orthogonal Frequency-Division Multiplexing (OFDM), realized by using Universal Software Radio Peripheral N210 platforms. Parameters of low density parity check codes and interleaver that provide significant performance improvement for the acceptable decoding latency are identified. The experimental results will be compared with the Monte Carlo simulation results obtained by using the simplified channel models.",
"title": ""
},
{
"docid": "732d6bd47a4ab7b77d1c192315a1577c",
"text": "In this paper, we address the problem of classifying image sets, each of which contains images belonging to the same class but covering large variations in, for instance, viewpoint and illumination. We innovatively formulate the problem as the computation of Manifold-Manifold Distance (MMD), i.e., calculating the distance between nonlinear manifolds each representing one image set. To compute MMD, we also propose a novel manifold learning approach, which expresses a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrating the distances between pair of subspaces respectively from one of the involved manifolds. The proposed MMD method is evaluated on the task of Face Recognition based on Image Set (FRIS). In FRIS, each known subject is enrolled with a set of facial images and modeled as a gallery manifold, while a testing subject is modeled as a probe manifold, which is then matched against all the gallery manifolds by MMD. Identification is achieved by seeking the minimum MMD. Experimental results on two public face databases, Honda/UCSD and CMU MoBo, demonstrate that the proposed MMD method outperforms the competing methods.",
"title": ""
},
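The local linear models in the passage above are subspaces, so a natural building block is a subspace-to-subspace distance. The sketch below uses principal angles computed from an SVD, which is one standard choice and only an assumed stand-in for the exact subspace distance the authors integrate into MMD; the toy dimensions and random data are likewise assumptions.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the subspaces spanned by the columns of A and B."""
    Qa, _ = np.linalg.qr(A)                       # orthonormal bases
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

def subspace_distance(A, B):
    """Projection-metric style distance from the sines of the principal angles."""
    return np.sqrt(np.sum(np.sin(principal_angles(A, B)) ** 2))

# Toy usage: two local linear models (10-D data, 3-D subspaces).
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))
B = rng.standard_normal((10, 3))
print(subspace_distance(A, B))
```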
{
"docid": "60182038191a764fd7070e8958185718",
"text": "Shales of very low metamorphic grade from the 2.78 to 2.45 billion-year-old (Ga) Mount Bruce Supergroup, Pilbara Craton, Western Australia, were analyzed for solvent extractable hydrocarbons. Samples were collected from ten drill cores and two mines in a sampling area centered in the Hamersley Basin near Wittenoom and ranging 200 km to the southeast, 100 km to the southwest and 70 km to the northwest. Almost all analyzed kerogenous sedimentary rocks yielded solvent extractable organic matter. Concentrations of total saturated hydrocarbons were commonly in the range of 1 to 20 ppm ( g/g rock) but reached maximum values of 1000 ppm. The abundance of aromatic hydrocarbons was 1 to 30 ppm. Analysis of the extracts by gas chromatography-mass spectrometry (GC-MS) and GC-MS metastable reaction monitoring (MRM) revealed the presence of n-alkanes, midand end-branched monomethylalkanes, -cyclohexylalkanes, acyclic isoprenoids, diamondoids, trito pentacyclic terpanes, steranes, aromatic steroids and polyaromatic hydrocarbons. Neither plant biomarkers nor hydrocarbon distributions indicative of Phanerozoic contamination were detected. The host kerogens of the hydrocarbons were depleted in C by 2 to 21‰ relative ton-alkanes, a pattern typical of, although more extreme than, other Precambrian samples. Acyclic isoprenoids showed carbon isotopic depletion relative to n-alkanes and concentrations of 2 -methylhopanes were relatively high, features rarely observed in the Phanerozoic but characteristic of many other Precambrian bitumens. Molecular parameters, including sterane and hopane ratios at their apparent thermal maxima, condensate-like alkane profiles, high monoand triaromatic steroid maturity parameters, high methyladamantane and methyldiamantane indices and high methylphenanthrene maturity ratios, indicate thermal maturities in the wet-gas generation zone. Additionally, extracts from shales associated with iron ore deposits at Tom Price and Newman have unusual polyaromatic hydrocarbon patterns indicative of pyrolytic dealkylation. The saturated hydrocarbons and biomarkers in bitumens from the Fortescue and Hamersley Groups are characterized as ‘probably syngenetic with their Archean host rock’ based on their typical Precambrian molecular and isotopic composition, extreme maturities that appear consistent with the thermal history of the host sediments, the absence of biomarkers diagnostic of Phanerozoic age, the absence of younger petroleum source rocks in the basin and the wide geographic distribution of the samples. Aromatic hydrocarbons detected in shales associated with iron ore deposits at Mt Tom Price and Mt Whaleback are characterized as ‘clearly Archean’ based on their hypermature composition and covalent bonding to kerogen. Copyright © 2003 Elsevier Ltd",
"title": ""
},
{
"docid": "f3f15a37a1d1a2a3a3647dc14f075297",
"text": "Stress is known to inhibit neuronal growth in the hippocampus. In addition to reducing the size and complexity of the dendritic tree, stress and elevated glucocorticoid levels are known to inhibit adult neurogenesis. Despite the negative effects of stress hormones on progenitor cell proliferation in the hippocampus, some experiences which produce robust increases in glucocorticoid levels actually promote neuronal growth. These experiences, including running, mating, enriched environment living, and intracranial self-stimulation, all share in common a strong hedonic component. Taken together, the findings suggest that rewarding experiences buffer progenitor cells in the dentate gyrus from the negative effects of elevated stress hormones. This chapter considers the evidence that stress and glucocorticoids inhibit neuronal growth along with the paradoxical findings of enhanced neuronal growth under rewarding conditions with a view toward understanding the underlying biological mechanisms.",
"title": ""
},
{
"docid": "fbee148ef2de028cc53a371c27b4d2be",
"text": "Desalination is a water-treatment process that separates salts from saline water to produce potable water or water that is low in total dissolved solids (TDS). Globally, the total installed capacity of desalination plants was 61 million m3 per day in 2008 [1]. Seawater desalination accounts for 67% of production, followed by brackish water at 19%, river water at 8%, and wastewater at 6%. Figure 1 show the worldwide feed-water percentage used in desalination. The most prolific users of desalinated water are in the Arab region, namely, Saudi Arabia, Kuwait, United Arab Emirates, Qatar, Oman, and Bahrain [2].",
"title": ""
},
{
"docid": "01cc1b289f68fa396655b9e374b6aaa9",
"text": "The biological mechanisms underlying long-term partner bonds in humans are unclear. The evolutionarily conserved neuropeptide oxytocin (OXT) is associated with the formation of partner bonds in some species via interactions with brain dopamine reward systems. However, whether it plays a similar role in humans has as yet not been established. Here, we report the results of a discovery and a replication study, each involving a double-blind, placebo-controlled, within-subject, pharmaco-functional MRI experiment with 20 heterosexual pair-bonded male volunteers. In both experiments, intranasal OXT treatment (24 IU) made subjects perceive their female partner's face as more attractive compared with unfamiliar women but had no effect on the attractiveness of other familiar women. This enhanced positive partner bias was paralleled by an increased response to partner stimuli compared with unfamiliar women in brain reward regions including the ventral tegmental area and the nucleus accumbens (NAcc). In the left NAcc, OXT even augmented the neural response to the partner compared with a familiar woman, indicating that this finding is partner-bond specific rather than due to familiarity. Taken together, our results suggest that OXT could contribute to romantic bonds in men by enhancing their partner's attractiveness and reward value compared with other women.",
"title": ""
},
{
"docid": "996f1743ca60efa05f5113a4459f8b61",
"text": "This paper presents a method for movie genre categorization of movie trailers, based on scene categorization. We view our approach as a step forward from using only low-level visual feature cues, towards the eventual goal of high-level seman- tic understanding of feature films. Our approach decom- poses each trailer into a collection of keyframes through shot boundary analysis. From these keyframes, we use state-of- the-art scene detectors and descriptors to extract features, which are then used for shot categorization via unsuper- vised learning. This allows us to represent trailers using a bag-of-visual-words (bovw) model with shot classes as vo- cabularies. We approach the genre classification task by mapping bovw temporally structured trailer features to four high-level movie genres: action, comedy, drama or horror films. We have conducted experiments on 1239 annotated trailers. Our experimental results demonstrate that exploit- ing scene structures improves film genre classification com- pared to using only low-level visual features.",
"title": ""
},
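As a companion to the passage above, here is a minimal bag-of-visual-words pipeline sketch: cluster keyframe descriptors into a vocabulary, histogram each trailer over the vocabulary, and train a linear classifier on genre labels. The scikit-learn calls are standard, but the descriptor shapes, vocabulary size, and random data are assumptions; this is not the paper's actual feature pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical stand-in data: per-trailer keyframe descriptors and genre labels.
trailers = [rng.standard_normal((int(rng.integers(30, 60)), 128)) for _ in range(20)]
labels = rng.integers(0, 4, size=20)          # 0..3 = action/comedy/drama/horror

k = 50                                        # vocabulary size (assumed)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
kmeans.fit(np.vstack(trailers))               # visual vocabulary from all descriptors

def bovw_histogram(descriptors):
    words = kmeans.predict(descriptors)       # assign each keyframe to a "shot class"
    h = np.bincount(words, minlength=k).astype(float)
    return h / h.sum()                        # normalized word histogram

X = np.array([bovw_histogram(t) for t in trailers])
clf = LinearSVC(C=1.0).fit(X, labels)
print(clf.predict(X[:3]))
```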
{
"docid": "954526bc72495a62e6205ca1b5d231f8",
"text": "We propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation. We reformulate decoding, a discrete optimization problem, into a continuous problem, such that optimization can make use of efficient gradient-based techniques. Our powerful decoding framework allows for more accurate decoding for standard neural machine translation models, as well as enabling decoding in intractable models such as intersection of several different NMT models. Our empirical results show that our decoding framework is effective, and can leads to substantial improvements in translations, especially in situations where greedy search and beam search are not feasible. Finally, we show how the technique is highly competitive with, and complementary to, reranking.",
"title": ""
}
] | scidocsrr |
0b02ff9a1750a5accbb6a69e13c8e8b4 | Fanning the Flames of Hate : Social Media and Hate Crime * | [
{
"docid": "8e1591b98c14a182125969bc12eda730",
"text": "A growing proportion of citizens rely on social media to gather political information and to engage in political discussions within their personal networks. Existing studies argue that social media create “echo-chambers,” where individuals are primarily exposed to likeminded views. However, this literature has ignored that social media platforms facilitate exposure to messages from those with whom individuals have weak ties, which are more likely to provide novel information to which individuals would not be exposed otherwise through offline interactions. Because weak ties tend to be with people who are more politically heterogeneous than citizens’ immediate personal networks, this exposure reduces political extremism. To test this hypothesis, I develop a new method to estimate dynamic ideal points for social media users. I apply this method to measure the ideological positions of millions of individuals in Germany, Spain, and the United States over time, as well as the ideological composition of their personal networks. Results from this panel design show that most social media users are embedded in ideologically diverse networks, and that exposure to political diversity has a positive effect on political moderation. This result is robust to the inclusion of covariates measuring offline political behavior, obtained by matching Twitter user profiles with publicly available voter files in several U.S. states. I also provide evidence from survey data in these three countries that bolsters these findings. Contrary to conventional wisdom, my analysis provides evidence that social media usage reduces mass political polarization. ∗Pablo Barberá (www.pablobarbera.com) is a Moore-Sloan Fellow at the NYU Center for Data Science. Mass political polarization is a signature phenomenon of our time. As such, it has received considerable scholarly and journalistic attention in recent years (see e.g. Abramowitz and Saunders, 2008 and Fiorina and Abrams, 2008). A growing body of work argues that the introduction of the Internet as a relevant communication tool is contributing to this trend (Farrell, 2012). Empirical evidence of persistent ideological sorting in online communication networks (Adamic and Glance, 2005; Conover et al., 2012; Colleoni, Rozza and Arvidsson, 2014) has been taken to suggest that Internet use may exacerbate mass political polarization. As Sunstein (2001) or Hindman (2008) argue, the Internet appears to create communities of like-minded individuals where cross-ideological interactions and exposure to political diversity are rare. This argument builds upon a long tradition of research that shows that political discussion in homogenous communication networks reinforces individuals’ existing attitudes (Berelson, Lazarsfeld and McPhee, 1954; Huckfeldt, 1995; Mutz, 2006) In this paper I challenge this conventional wisdom. I contend that social media usage – one of the most frequent online activities – reduces political polarization, and I provide empirical evidence to support this claim. My argument is two-fold. First, social media platforms like Facebook or Twitter increase incidental exposure to political messages shared by peers. Second, these sites facilitate exposure to messages from those with whom individuals have weak social ties (Granovetter, 1973), which are more likely to provide novel information. 
Consequently, despite the homophilic nature of personal networks (McPherson, Smith-Lovin and Cook, 2001), social media leads to exposure to a wider range of political opinions than one would normally encounter offline. This induces political moderation at the individual level and, counter intuitively, helps to decrease mass political polarization. To test this hypothesis, I develop a new method to measure the ideological positions of Twitter users at any point in time, and apply it to estimate the ideal points of millions of citizens in three countries with different levels of mass political polarization (Germany, Spain, and the United States). This measure allows me to observe not only how their political preferences evolve, but also the ideological composition of their communication networks. My approach represents a crucial improvement over survey studies of political networks, which often ask only about close discussion partners and in practice exclude weak ties, limiting researchers’ ability to study their influence. In addition, I rely on name identification techniques to match Twitter users with publicly available voter files in the states of Arkansas, California, Florida, Ohio, and Pennsylvania. This allows me to demonstrate that my results are not confounded by covariates measuring",
"title": ""
}
] | [
{
"docid": "7d53fcce145badeeaeff55b5299010b9",
"text": "Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regards to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of a software utilized by end users. Second, we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.",
"title": ""
},
{
"docid": "af7479706cd15bb91fc84fba4e194eec",
"text": "Wireless positioning has attracted much research attention and has become increasingly important in recent years. Wireless positioning has been found very useful for other applications besides E911 service, ranging from vehicle navigation and network optimization to resource management and automated billing. Although many positioning devices and services are currently available, it is necessary to develop an integrated and seamless positioning platform to provide a uniform solution for different network configurations. This article surveys the state-of-the-art positioning designs, focusing specifically on signal processing techniques in network-aided positioning. It serves as a tutorial for researchers and engineers interested in this rapidly growing field. It also provides new directions for future research for those who have been working in this field for many years.",
"title": ""
},
{
"docid": "6224b6e5d7cf7f48eccede10de743be2",
"text": "Tumor-associated macrophages (TAM) form a major component of the tumor stroma. However, important concepts such as TAM heterogeneity and the nature of the monocytic TAM precursors remain speculative. Here, we show for the first time that mouse mammary tumors contained functionally distinct subsets of TAMs and provide markers for their identification. Furthermore, in search of the TAM progenitors, we show that the tumor-monocyte pool almost exclusively consisted of Ly6C(hi)CX(3)CR1(low) monocytes, which continuously seeded tumors and renewed all nonproliferating TAM subsets. Interestingly, gene and protein profiling indicated that distinct TAM populations differed at the molecular level and could be classified based on the classic (M1) versus alternative (M2) macrophage activation paradigm. Importantly, the more M2-like TAMs were enriched in hypoxic tumor areas, had a superior proangiogenic activity in vivo, and increased in numbers as tumors progressed. Finally, it was shown that the TAM subsets were poor antigen presenters, but could suppress T-cell activation, albeit by using different suppressive mechanisms. Together, our data help to unravel the complexities of the tumor-infiltrating myeloid cell compartment and provide a rationale for targeting specialized TAM subsets, thereby optimally \"re-educating\" the TAM compartment.",
"title": ""
},
{
"docid": "8b908e2c7ed644371b37792a96207401",
"text": "Most websites, services, and applications have come to rely on Internet services (e.g., DNS, CDN, email, WWW, etc.) offered by third parties. Although employing such services generally improves reliability and cost-effectiveness, it also creates dependencies on service providers, which may expose websites to additional risks, such as DDoS attacks or cascading failures. As cloud services are becoming more popular, an increasing percentage of the overall Internet ecosystem relies on a decreasing number of highly popular services. In our general effort to assess the security risk for a given entity, and motivated by the effects of recent service disruptions, we perform a large-scale analysis of passive and active DNS datasets including more than 2.5 trillion queries in order to discover the dependencies between websites and Internet services.\n In this paper, we present the findings of our DNS dataset analysis, and attempt to expose important insights about the ecosystem of dependencies. To further understand the nature of dependencies, we perform graph-theoretic analysis on the dependency graph and propose support power, a novel power measure that can quantify the amount of dependence websites and other services have on a particular service. Our DNS analysis findings reveal that the current service ecosystem is dominated by a handful of popular service providers---with Amazon being the leader, by far---whose popularity is steadily increasing. These findings are further supported by our graph analysis results, which also reveals a set of less-popular services that many (regional) websites depend on.",
"title": ""
},
{
"docid": "eff407fb0d45ebeea3d5965b7b5df14b",
"text": "In order to develop intelligent systems that attain the trust of their users, it is important to understand how users perceive such systems and develop those perceptions over time. We present an investigation into how users come to understand an intelligent system as they use it in their daily work. During a six-week field study, we interviewed eight office workers regarding the operation of a system that predicted their managers' interruptibility, comparing their mental models to the actual system model. Our results show that by the end of the study, participants were able to discount some of their initial misconceptions about what information the system used for reasoning about interruptibility. However, the overarching structures of their mental models stayed relatively stable over the course of the study. Lastly, we found that participants were able to give lay descriptions attributing simple machine learning concepts to the system despite their lack of technical knowledge. Our findings suggest an appropriate level of feedback for user interfaces of intelligent systems, provide a baseline level of complexity for user understanding, and highlight the challenges of making users aware of sensed inputs for such systems.",
"title": ""
},
{
"docid": "691213a1d26e3c11f13e38453301cbc2",
"text": "Numerous studies of the electrophysiology and neuropathology of temporal lobe epilepsy have demonstrated the mesial temporal structures to be the site of seizure origin in the majority of cases. This is the rationale for a transcortical selective approach, first introduced by Niemeyer, for removal of the hippocampus and amygdala. Series from a number of centers have demonstrated the efficacy of selective amygdalohippocampectomy compared to a more traditional resection. The technique described here and used at the Montreal Neurological Institute (MNI) utilizes a strictly endopial resection of the hippocampal formation and amygdala in addition to computer image guidance to perform the procedure. Ninety-five percent of patients at the MNI who underwent selective amygdalohippocampectomy realized a cessation of seizures, or greater than 90% reduction, with minimal risk of complications.",
"title": ""
},
{
"docid": "350868c68de72786866173c2f6e8ae90",
"text": "We introduce kernel entropy component analysis (kernel ECA) as a new method for data transformation and dimensionality reduction. Kernel ECA reveals structure relating to the Renyi entropy of the input space data set, estimated via a kernel matrix using Parzen windowing. This is achieved by projections onto a subset of entropy preserving kernel principal component analysis (kernel PCA) axes. This subset does not need, in general, to correspond to the top eigenvalues of the kernel matrix, in contrast to the dimensionality reduction using kernel PCA. We show that kernel ECA may produce strikingly different transformed data sets compared to kernel PCA, with a distinct angle-based structure. A new spectral clustering algorithm utilizing this structure is developed with positive results. Furthermore, kernel ECA is shown to be an useful alternative for pattern denoising.",
"title": ""
},
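To make the axis-selection idea in the passage above concrete, the sketch below ranks kernel PCA axes by their contribution to the Renyi entropy estimate, lambda_i * (1^T e_i)^2, and projects onto the top contributors. The RBF kernel, its bandwidth, and the toy data are assumptions, and the sketch leaves out practical details such as out-of-sample projection.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kernel_eca(X, n_components=2, sigma=1.0):
    """Project data onto the kernel PCA axes that contribute most to the
    Renyi entropy estimate V = (1/N^2) * 1^T K 1 (Parzen/RBF kernel assumed)."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))   # uncentred kernel
    lam, E = np.linalg.eigh(K)                                   # ascending eigenvalues
    # Entropy contribution of each axis: lambda_i * (1^T e_i)^2.
    contrib = lam * (E.sum(axis=0) ** 2)
    idx = np.argsort(contrib)[::-1][:n_components]               # top contributors
    return np.sqrt(np.maximum(lam[idx], 0.0)) * E[:, idx]        # kernel PCA scores

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
Z = kernel_eca(X, n_components=2, sigma=1.0)
print(Z.shape)   # (60, 2)
```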
{
"docid": "ece965df2822fa177a87bb1d41405d52",
"text": "Sexual murders and sexual serial killers have always been of popular interest with the public. Professionals are still mystified as to why sexual killers commit the “ultimate crime” of both sexual assault and homicide. Questions emerge as to why some sexual offenders kill one time vs in a serial manner. It is understood that the vast majority of sexual offenders such as pedophiles and adult rapists do NOT kill their victims. The purpose of this chapter is to explore serial sexual murder in terms of both theoretical and clinical parameters in an attempt to understand why they commit the “ultimate crime.” We will also examine the similarities and differences between serial sexual murderers and typical rape offenders who do not kill their victims. Using real-life examples of wellknown serial killers, we will compare the “theoretical” with the “practical;” what happened, why it happened, and what we may be able to do about it. The authors of this chapter present two perspectives: (1) A developmental motivational view as to why serial killers commit these homicides, and (2) Implications for treatment of violent offenders. To adequately present these perspectives, we must look at four distinct areas: (1) Differentiating between the two types of “lust” murderers i.e. rapists and sexual serial killers, (2) Examining personality or lifestyle themes, (3) Exploration of the mind-body developmental process, and (4) treatment applications for violent offenders.",
"title": ""
},
{
"docid": "ac168ff92c464cb90a9a4ca0eb5bfa5c",
"text": "Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.",
"title": ""
},
{
"docid": "5d3275250a345b5f8c8a14a394025a31",
"text": "Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result in not only a considerable impact on train delays and maintenance costs, but also on safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats that are detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use them to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach.",
"title": ""
},
{
"docid": "ce10550373e7e6dd28ceeb88dcafc8cc",
"text": "This paper introduces a method giving dimensions of inline evanescent-mode ridge waveguide bandpass filters. Evanescent mode couplings are evaluated individually, without optimization of the entire filter. This is obtained through an improved network model of the evanescent-mode coupling, together with novel analytical formulas to correct the resonators slope parameters. Unlike prior works based on full-wave optimization of the overall structure, this method is fast and leads to accurate bandwidth results. Several filter examples are included to support the design method. A prototype filter has been manufactured and the RF measurements are in good agreement with theory.",
"title": ""
},
{
"docid": "4fea6fb309d496f9b4fd281c80a8eed7",
"text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.",
"title": ""
},
{
"docid": "45c1119cd76ed4f1470ac398caf6d192",
"text": "UNLABELLED\nL-3,4-Dihydroxy-6-(18)F-fluoro-phenyl-alanine ((18)F-FDOPA) is an amino acid analog used to evaluate presynaptic dopaminergic neuronal function. Evaluation of tumor recurrence in neurooncology is another application. Here, the kinetics of (18)F-FDOPA in brain tumors were investigated.\n\n\nMETHODS\nA total of 37 patients underwent 45 studies; 10 had grade IV, 10 had grade III, and 13 had grade II brain tumors; 2 had metastases; and 2 had benign lesions. After (18)F-DOPA was administered at 1.5-5 MBq/kg, dynamic PET images were acquired for 75 min. Images were reconstructed with iterative algorithms, and corrections for attenuation and scatter were applied. Images representing venous structures, the striatum, and tumors were generated with factor analysis, and from these, input and output functions were derived with simple threshold techniques. Compartmental modeling was applied to estimate rate constants.\n\n\nRESULTS\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors and the cerebellum but not the striatum. A 3-compartment model with corrections for tissue blood volume, metabolites, and partial volume appeared to be superior for describing (18)F-FDOPA kinetics in tumors and the striatum. A significant correlation was found between influx rate constant K and late uptake (standardized uptake value from 65 to 75 min), whereas the correlation of K with early uptake was weak. High-grade tumors had significantly higher transport rate constant k(1), equilibrium distribution volumes, and influx rate constant K than did low-grade tumors (P < 0.01). Tumor uptake showed a maximum at about 15 min, whereas the striatum typically showed a plateau-shaped curve. Patlak graphical analysis did not provide accurate parameter estimates. Logan graphical analysis yielded reliable estimates of the distribution volume and could separate newly diagnosed high-grade tumors from low-grade tumors.\n\n\nCONCLUSION\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors in a first approximation. A 3-compartment model with corrections for metabolites and partial volume could adequately describe (18)F-FDOPA kinetics in tumors, the striatum, and the cerebellum. This model suggests that (18)F-FDOPA was transported but not trapped in tumors, unlike in the striatum. The shape of the uptake curve appeared to be related to tumor grade. After an early maximum, high-grade tumors had a steep descending branch, whereas low-grade tumors had a slowly declining curve, like that for the cerebellum but on a higher scale.",
"title": ""
},
{
"docid": "b72e2a03a7508cf1394ba80d9d9fc009",
"text": "Accurate mediastinal lymph node dissection during thoracotomy is mandatory for staging and for adjuvant therapy in lung cancer. Pre-therapeutic staging for neoadjuvant therapy or for video assisted thoracoscopic resection of lung cancer is achieved usually by CT-scan and mediastinoscopy. However, these methods do not reach the accuracy of open nodal dissection. Therefore we developed a technique of radical video-assisted mediastinoscopic lymphadenectomy (VAMLA). This study was designed to show that VAMLA is feasible and that radicality of lymphadenectomy is comparable to the open procedure.In a prospective study all VAMLA procedures were registered and followed up in a database. Specimens of VAMLA were analysed by a single pathologist. Lymph nodes were counted and compared to open lymphadenectomy. The weight of the dissected tissue was documented. In patients receiving tumour resection subsequently to VAMLA, radicality of the previous mediastinoscopic dissection was controlled during thoracotomy.37 patients underwent video-assisted mediastinoscopy from June 1999 to April 2000. Mean duration of anaesthesia was 84.6 (SD 35.8) minutes.In 7 patients radical lymphadenectomy was not intended because of bulky nodal disease or benign disease. The remaining 30 patients underwent complete systematic nodal dissection as VAMLA.18 patients received tumour resection subsequently (12 right- and 6 left-sided thoracotomies). These thoracotomies allowed open re-dissection of 12 paratracheal regions, 10 of which were found free of lymphatic tissue. In two patients, 1 and 2 left over paratracheal nodes were counted respectively. 10/18 re-dissected subcarinal regions were found to be radically dissected by VAMLA. In 6 patients one single node and in the remaining 2 cases 5 and 8 nodes were found, respectively. However these counts also included nodes from the ipsilateral main bronchus. None of these nodes was positive for tumour.Average weight of the tissue that was harvested by VAMLA was 10.1 g (2.2-23.7, SD 6.3). An average number of 20.5 (6-60, SD 12.5) nodes per patient were counted in the specimens. This is comparable to our historical data from open lymphadenectomy.One palsy of the recurrent nerve in a patient with extensive preparation of the nerve and resection of 11 left-sided enlarged nodes was the only severe complication in this series.VAMLA seems to accomplish mediastinal nodal dissection comparable to open lymphadenectomy and supports video assisted surgery for lung cancer. In neoadjuvant setting a correct mediastinal N-staging is achieved.",
"title": ""
},
{
"docid": "f7e3ee26413525acea763f7d4635ebab",
"text": "Network Attached Storage (NAS) and Virtual Machines (VMs) are widely used in data centers thanks to their manageability, scalability, and ability to consolidate resources. But the shift from physical to virtual clients drastically changes the I/O workloads seen on NAS servers, due to guest file system encapsulation in virtual disk images and the multiplexing of request streams from different VMs. Unfortunately, current NAS workload generators and benchmarks produce workloads typical to physical machines. This paper makes two contributions. First, we studied the extent to which virtualization is changing existing NAS workloads. We observed significant changes, including the disappearance of file system meta-data operations at the NAS layer, changed I/O sizes, and increased randomness. Second, we created a set of versatile NAS benchmarks to synthesize virtualized workloads. This allows us to generate accurate virtualized workloads without the effort and limitations associated with setting up a full virtualized environment. Our experiments demonstrate that the relative error of our virtualized benchmarks, evaluated across 11 parameters, averages less than 10%.",
"title": ""
},
{
"docid": "699ef9eecd9d7fbef01930915c3480f0",
"text": "Disassembly of the cone-shaped HIV-1 capsid in target cells is a prerequisite for establishing a life-long infection. This step in HIV-1 entry, referred to as uncoating, is critical yet poorly understood. Here we report a novel strategy to visualize HIV-1 uncoating using a fluorescently tagged oligomeric form of a capsid-binding host protein cyclophilin A (CypA-DsRed), which is specifically packaged into virions through the high-avidity binding to capsid (CA). Single virus imaging reveals that CypA-DsRed remains associated with cores after permeabilization/removal of the viral membrane and that CypA-DsRed and CA are lost concomitantly from the cores in vitro and in living cells. The rate of loss is modulated by the core stability and is accelerated upon the initiation of reverse transcription. We show that the majority of single cores lose CypA-DsRed shortly after viral fusion, while a small fraction remains intact for several hours. Single particle tracking at late times post-infection reveals a gradual loss of CypA-DsRed which is dependent on reverse transcription. Uncoating occurs both in the cytoplasm and at the nuclear membrane. Our novel imaging assay thus enables time-resolved visualization of single HIV-1 uncoating in living cells, and reveals the previously unappreciated spatio-temporal features of this incompletely understood process.",
"title": ""
},
{
"docid": "d0df1484ea03e91489e8916130392506",
"text": "Most of the conventional face hallucination methods assume the input image is sufficiently large and aligned, and all require the input image to be noise-free. Their performance degrades drastically if the input image is tiny, unaligned, and contaminated by noise. In this paper, we introduce a novel transformative discriminative autoencoder to 8X super-resolve unaligned noisy and tiny (16X16) low-resolution face images. In contrast to encoder-decoder based autoencoders, our method uses decoder-encoder-decoder networks. We first employ a transformative discriminative decoder network to upsample and denoise simultaneously. Then we use a transformative encoder network to project the intermediate HR faces to aligned and noise-free LR faces. Finally, we use the second decoder to generate hallucinated HR images. Our extensive evaluations on a very large face dataset show that our method achieves superior hallucination results and outperforms the state-of-the-art by a large margin of 1.82dB PSNR.",
"title": ""
},
{
"docid": "88e9a282434e95a43366df7dfdf18a94",
"text": "Traditional approaches to building a large scale knowledge graph have usually relied on extracting information (entities, their properties, and relations between them) from unstructured text (e.g. Dbpedia). Recent advances in Convolutional Neural Networks (CNN) allow us to shift our focus to learning entities and relations from images, as they build robust models that require little or no pre-processing of the images. In this paper, we present an approach to identify and extract spatial relations (e.g., The girl is standing behind the table) from images using CNNs. Our research addresses two specific challenges: providing insight into how spatial relations are learned by the network and which parts of the image are used to predict these relations. We use the pre-trained network VGGNet to extract features from an image and train a Multi-layer Perceptron (MLP) on a set of synthetic images and the sun09 dataset to extract spatial relations. The MLP predicts spatial relations without a bounding box around the objects or the space in the image depicting the relation. To understand how the spatial relations are represented in the network, a heatmap is overlayed on the image to show the regions that are deemed important by the network. Also, we analyze the MLP to show the relationship between the activation of consistent groups of nodes and the prediction of a spatial relation. We show how the loss of these groups affects the network's ability to identify relations.",
"title": ""
},
{
"docid": "9c153be5ea6638cda30b107af75c6937",
"text": "Learning to rank studies have mostly focused on query-dependent and query-independent document features, which enable the learning of ranking models of increased effectiveness. Modern learning to rank techniques based on regression trees can support query features, which are document-independent, and hence have the same values for all documents being ranked for a query. In doing so, such techniques are able to learn sub-trees that are specific to certain types of query. However, it is unclear which classes of features are useful for learning to rank, as previous studies leveraged anonymised features. In this work, we examine the usefulness of four classes of query features, based on topic classification, the history of the query in a query log, the predicted performance of the query, and the presence of concepts such as persons and organisations in the query. Through experiments on the ClueWeb09 collection, our results using a state-of-the-art learning to rank technique based on regression trees show that all four classes of query features can significantly improve upon an effective learned model that does not use any query feature.",
"title": ""
},
{
"docid": "6f679c5678f1cc5fed0af517005cb6f5",
"text": "In today's world of globalization, there is a serious need of incorporating semantics in Education Domain which is very significant with an ultimate goal of providing an efficient, adaptive and personalized learning environment. An attempt towards this goal has been made to develop an Education based Ontology with some capability to describe a semantic web based sharable knowledge. So as a contribution, this paper presents a revisit towards amalgamating Semantics in Education. In this direction, an effort has been made to construct an Education based Ontology using Protege 5.2.0, where a hierarchy of classes and subclasses have been defined along with their properties, relations, and instances. Finally, at the end of this paper an implementation is also presented involving query retrieval using DLquery illustrations.",
"title": ""
}
] | scidocsrr |
c0e5637419b982743b05ba2eb612611f | Novel Feature Extraction, Selection and Fusion for Effective Malware Family Classification | [
{
"docid": "fe05cc4e31effca11e2718ce05635a97",
"text": "In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradientbased approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker’s knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.",
"title": ""
}
] | [
{
"docid": "ddb2ba1118e28acf687208bff99ce53a",
"text": "We show that information about social relationships can be used to improve user-level sentiment analysis. The main motivation behind our approach is that users that are somehow \"connected\" may be more likely to hold similar opinions; therefore, relationship information can complement what we can extract about a user's viewpoints from their utterances. Employing Twitter as a source for our experimental data, and working within a semi-supervised framework, we propose models that are induced either from the Twitter follower/followee network or from the network in Twitter formed by users referring to each other using \"@\" mentions. Our transductive learning results reveal that incorporating social-network information can indeed lead to statistically significant sentiment classification improvements over the performance of an approach based on Support Vector Machines having access only to textual features.",
"title": ""
},
{
"docid": "a90547e8c5842cfc59d26f8228ea53b6",
"text": "Recent advancement in the processing power of onboard computers has encouraged engineers to impart visual feedbacks into various systems like mechatronics and internet of things. Applications ranging from CCTV surveillance to target detection and tracking using UAVs, there is a wide variety of demand on image processing techniques in terms of computational time and quality. In this scenario, developing generalised algorithms which gives a freedom to user in choosing the trade-off between quality and quick response is a challenging task. In this paper a novel boundary detection algorithm for segregating similar coloured objects in an image is presented, which accommodates a degree of freedom in choosing resolution of object detection to the detection time. This method uses colour based segmentation as preprocessing technique to reduce overall computational complexity. It is independent of the shape (convex or non-convex) and size of the object. Algorithm is developed using Open-CV libraries and implemented for separating similar coloured vehicles from an image of different vehicles on road. Implementation results showing different choices of boundary tightness and computation times are showcased.",
"title": ""
},
{
"docid": "09e98de8c53d4695ec7054c4d6451bce",
"text": "This paper presents an intelligent traffic management system using RFID technology. The system is capable of providing practically important traffic data which would aid in reducing the travel time for the users. Also, it can be used for other purposes like tracing of stolen cars, vehicles that evade traffic signals/tickets, toll collection or vehicle taxes etc. The system consists of a passive tag, an RFID reader, a microcontroller, a GPRS module, a high-speed server with a database system and a user module. Using RFID technology, this system collects the required data and calculates average speed of vehicles on each road of a city under consideration. It then transmits the acquired data i.e., average speed calculated at various junctions to the central computation server which calculates the time taken by a vehicle to travel in a particular road. Through Dijkstra's algorithm, the central server computes the fastest route to all the nodes (junctions) considering each node as the initial point in the city. Therefore, the system creates a map of shortest time paths of the whole city. This data is accessed by users through an interface module placed in their vehicles.",
"title": ""
},
{
"docid": "492b01d63bbe0e26522958e8d6147592",
"text": "In this paper, an original method to reduce the height of a dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The miniaturization technique consists in adding a capacitive load between vertical plates. The height of the radiating element is reduced to 0.1λ0, where λ0 is the wavelength at the lowest operation frequency for a Standing Wave Ratio (SWR) <2.5, which corresponds to a reduction factor of 37.5%. The measured input impedance bandwidth is 64% from 1.6 GHz to 3.1 GHz with a SWR <2.5.",
"title": ""
},
{
"docid": "3f8b8ef850aa838289265d175dfa7f1d",
"text": "If competitive equilibrium is defined as a situation in which prices are such that all arbitrage profits are eliminated, is it possible that a competitive economy always be in equilibrium? Clearly not, for then those who arbitrage make no (private) return from their (privately) costly activity. Hence the assumptions that all markets, including that for information, are always in equilibrium and always perfectly arbitraged are inconsistent when arbitrage is costly. We propose here a model in which there is an equilibrium degree of disequilibrium: prices reflect the information of informed individuals (arbitrageurs) but only partially, so that those who expend resources to obtain information do receive compensation. How informative the price system is depends on the number of individuals who are informed; but the number of individuals who are informed is itself an endogenous variable in the model. The model is the simplest one in which prices perform a well-articulated role in conveying information from the informed to the uninformed. When informed individuals observe information that the return to a security is going to be high, they bid its price up, and conversely when they observe information that the return is going to be low. Thus the price system makes publicly available the information obtained by informed individuals to the uniformed. In general, however, it does this imperfectly; this is perhaps lucky, for were it to do it perfectly, an equilibrium would not exist. In the introduction, we shall discuss the general methodology and present some conjectures concerning certain properties of the equilibrium. The remaining analytic sections of the paper are devoted to analyzing in detail an important example of our general model, in which our conjectures concerning the nature of the equilibrium can be shown to be correct. We conclude with a discussion of the implications of our approach and results, with particular emphasis on the relationship of our results to the literature on \"efficient capital markets.\"",
"title": ""
},
{
"docid": "3ab7d9715cd0d58ba07d1b3139f98378",
"text": "We describe two visual field maps, lateral occipital areas 1 (LO1) and 2 (LO2), in the human lateral occipital cortex between the dorsal part of visual area V3 and visual area V5/MT+. Each map contained a topographic representation of the contralateral visual hemifield. The eccentricity representations were shared with V1/V2/V3. The polar angle representation in LO1 extended from the lower vertical meridian (at the boundary with dorsal V3) through the horizontal to the upper vertical meridian (at the boundary with LO2). The polar angle representation in LO2 was the mirror-reversal of that in LO1. LO1 and LO2 overlapped with the posterior part of the object-selective lateral occipital complex and the kinetic occipital region (KO). The retinotopy and functional properties of LO1 and LO2 suggest that they correspond to two new human visual areas, which lack exact homologues in macaque visual cortex. The topography, stimulus selectivity, and anatomical location of LO1 and LO2 indicate that they integrate shape information from multiple visual submodalities in retinotopic coordinates.",
"title": ""
},
{
"docid": "69061b0997b9b81517d6263251164f6c",
"text": "Log-structured merge tree (LSM-tree)-based key-value stores are widely deployed in large-scale storage systems. The underlying reason is that the traditional relational databases cannot reach the high performance required by big-data applications. As high-throughput alternatives to relational databases, LSM-tree-based key-value stores can support high-throughput write operations and provide high sequential bandwidth in storage systems. However, the compaction process triggers write amplification and is confronted with the degraded write performance, especially under update-intensive workloads. To address this issue, we design a holistic key-value store to explorer near-data processing (NDP) and on-demand scheduling for compaction optimization in an LSM-tree key-value store, named DStore. DStore makes full use of various computing capacities in the host-side and device-side subsystems. DStore dynamically divides the whole host-side compaction tasks into the above two-side subsystems according to two-side different computing capabilities. Meanwhile, the device must be featured with an NDP model. The divided compaction tasks are performed by the host and the device in parallel. In DStore, the NDP-based devices exhibit low-latency and high-bandwidth performance, thus facilitating key-value stores. DStore not only accomplishes compaction for key-value stores but also improves the system performance. We implement our DStore prototype in a real-world platform, and different kinds of testbeds are employed in our experiment. LevelDB and a static compaction optimization using the NDP model (called Co-KV) are used to compare with the DStore in our evaluation. Results show that DStore achieves about $3.7 \\times $ performance improvement over LevelDB under the db_bench workload. In addition, DStore-enabled key-value stores outperform LevelDB by a factor of about $3.3 \\times $ and 77% in terms of throughput and latency under YCSB benchmark, respectively.",
"title": ""
},
{
"docid": "b776307764d3946fc4e7f6158b656435",
"text": "Recent development advances have allowed silicon (Si) semiconductor technology to approach the theoretical limits of the Si material; however, power device requirements for many applications are at a point that the present Si-based power devices can not handle. The requirements include higher blocking voltages, switching frequencies, efficiency, and reliability. To overcome these limitations, new semiconductor materials for power device applications are needed. For high power requirements, wide band gap semiconductors like silicon carbide (SiC), gallium nitride (GaN), and diamond with their superior electrical properties are likely candidates to replace Si in the near future. This paper compares all the aforementioned wide bandgap semiconductors with respect to their promise and applicability for power applications and predicts the future of power device semiconductor materials.",
"title": ""
},
{
"docid": "ca4f93646f4975239771a2f49c108569",
"text": "In this report we describe a case of the Zoon's balanitis in a boy with HIV (AIDS B2). The clinical presentation, failure of topical treatment, cure by circumcision, and the histopathology findings are presented.",
"title": ""
},
{
"docid": "cf5e440f064656488506d90285c7885d",
"text": "A key issue in delay tolerant networks (DTN) is to find the right node to store and relay messages. We consider messages annotated with the unique keywords describing themessage subject, and nodes also adds keywords to describe their mission interests, priority and their transient social relationship (TSR). To offset resource costs, an incentive mechanism is developed over transient social relationships which enrich enroute message content and motivate better semantically related nodes to carry and forward messages. The incentive mechanism ensures avoidance of congestion due to uncooperative or selfish behavior of nodes.",
"title": ""
},
{
"docid": "a2e3f77329445961b925b65c39c45fe9",
"text": "Sampling-based algorithms for path planning, such as the Rapidly-exploring Random Tree (RRT), have achieved great success, thanks to their ability to efficiently solve complex high-dimensional problems. However, standard versions of these algorithms cannot guarantee optimality or even high-quality for the produced paths. In recent years, variants of these methods, such as T-RRT, have been proposed to deal with cost spaces: by taking configuration-cost functions into account during the exploration process, they can produce high-quality (i.e., low-cost) paths. Other novel variants, such as RRT*, can deal with optimal path planning: they ensure convergence toward the optimal path, with respect to a given path-quality criterion. In this paper, we propose to solve a complex problem encompassing this two paradigms: optimal path planning in a cost space. For that, we develop two efficient sampling-based approaches that combine the underlying principles of RRT* and T-RRT. These algorithms, called T-RRT* and AT-RRT, offer the same asymptotic optimality guarantees as RRT*. Results presented on several classes of problems show that they converge faster than RRT* toward the optimal path, especially when the topology of the search space is complex and/or when its dimensionality is high.",
"title": ""
},
{
"docid": "85c32427a1a6c04e3024d22b03b26725",
"text": "Monte Carlo tree search (MCTS) is extremely popular in computer Go which determines each action by enormous simulations in a broad and deep search tree. However, human experts select most actions by pattern analysis and careful evaluation rather than brute search of millions of future interactions. In this paper, we propose a computer Go system that follows experts way of thinking and playing. Our system consists of two parts. The first part is a novel deep alternative neural network (DANN) used to generate candidates of next move. Compared with existing deep convolutional neural network (DCNN), DANN inserts recurrent layer after each convolutional layer and stacks them in an alternative manner. We show such setting can preserve more contexts of local features and its evolutions which are beneficial for move prediction. The second part is a long-term evaluation (LTE) module used to provide a reliable evaluation of candidates rather than a single probability from move predictor. This is consistent with human experts nature of playing since they can foresee tens of steps to give an accurate estimation of candidates. In our system, for each candidate, LTE calculates a cumulative reward after several future interactions when local variations are settled. Combining criteria from the two parts, our system determines the optimal choice of next move. For more comprehensive experiments, we introduce a new professional Go dataset (PGD), consisting of 253, 233 professional records. Experiments on GoGoD and PGD datasets show the DANN can substantially improve performance of move prediction over pure DCNN. When combining LTE, our system outperforms most relevant approaches and open engines based on",
"title": ""
},
{
"docid": "cb55daf6ada8e9caba80aa4f421fc395",
"text": "This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. We began right at the start of the KinectT Mrevolution when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal data in this area of application. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. We also overview recent state of the art works on gesture recognition based on a proposed taxonomy for gesture recognition, discussing challenges and future lines of research.",
"title": ""
},
{
"docid": "6f5700dde97988b8bd95cd58956febfc",
"text": "The prolapse of one or several pelvic organs is a condition that has been known by medicine since its early days, and different therapeutic approaches have been proposed and accepted. But one of the main problems concerning the prolapse of pelvic organs is the need for a universal, clear and reliable staging method.Because the prolapse has been known and recognized as a disease for more than one hundred years, so are different systems proposed for its staging. But none has proved itself to respond to all the requirements of the medical community, so the vast majority were seen coming and going, failing to become the single most useful system for staging in pelvic organ prolapse (POP).The latest addition to the group of staging systems is the POP-Q system, which is becoming increasingly popular with specialists all over the world, because, although is not very simple as a concept, it helps defining the features of a prolapse at a level of completeness not reached by any other system to date. In this vision, the POP-Q system may reach the importance and recognition of the TNM system use in oncology.This paper briefly describes the POP-Q system, by comparison with other staging systems, analyzing its main features and the concept behind it.",
"title": ""
},
{
"docid": "e6d5f3c9a58afcceae99ff522d6dfa81",
"text": "Strategic information systems planning (SISP) is a key concern facing top business and information systems executives. Observers have suggested that both too little and too much SISP can prove ineffective. Hypotheses examine the expected relationship between comprehensiveness and effectiveness in five SISP planning phases. They predict a nonlinear, inverted-U relationship thus suggesting the existence of an optimal level of comprehensiveness. A survey collected data from 161 US information systems executives. After an extensive validation of the constructs, the statistical analysis supported the hypothesis in a Strategy Implementation Planning phase, but not in terms of the other four SISP phases. Managers may benefit from the knowledge that both too much and too little implementation planning may hinder SISP success. Future researchers should investigate why the hypothesis was supported for that phase, but not the others. q 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dfd70adb9d93b1733d9a2d2ba618673a",
"text": "Throughout most of human history, congenital anomalies were perceived as omens, portents, or punishments of supernatural origin. This concept is reflected in the term “monster,” probably derived from the Latin verb monstrare (to show or reveal). Other explanations for congenital abnormalities included witchcraft, astrological configurations, or emotional experiences of the pregnant mother. Malformed humans and animals also inspired many of the characters populating the literature, mythology, art, and religion of every culture. By the eighteenth century superstition still dominated public conceptions of malformations, but this topic was beginning to attract serious attention from physicians and scientists. Abnormalities such as conjoined twins were a popular subject for anatomists, who produced some superb morphological studies. However, scientific investigation of causes and mechanisms of abnormal development were delayed because of persisting support for the concept of preformation of embryos. Toward the end of the century, epigenesis finally achieved acceptance by leading scientists, opening the door to the investigation of normal and abnormal development. By the dawn of the nineteenth century, a foundation had been established for the study of abnormal development, which was destined to become one of the most productive of biomedical sciences. The ultimate mechanisms of normal and abnormal development are now explored at the molecular level, and the lives of countless individuals born with malformations have been greatly enhanced by advances in medicine and surgery. No other biomedical science provides a more colorful and instructive illustration of the long journey from superstition to understanding. The older literature in this field, in addition to its historical interest, is a source of knowledge and information of unique value for scientists and practitioners.",
"title": ""
},
{
"docid": "948ba9db3b8daebbfae90d81d906dc6c",
"text": "BACKGROUND\nRelapsed acute lymphoblastic leukemia (ALL) is difficult to treat despite the availability of aggressive therapies. Chimeric antigen receptor-modified T cells targeting CD19 may overcome many limitations of conventional therapies and induce remission in patients with refractory disease.\n\n\nMETHODS\nWe infused autologous T cells transduced with a CD19-directed chimeric antigen receptor (CTL019) lentiviral vector in patients with relapsed or refractory ALL at doses of 0.76×10(6) to 20.6×10(6) CTL019 cells per kilogram of body weight. Patients were monitored for a response, toxic effects, and the expansion and persistence of circulating CTL019 T cells.\n\n\nRESULTS\nA total of 30 children and adults received CTL019. Complete remission was achieved in 27 patients (90%), including 2 patients with blinatumomab-refractory disease and 15 who had undergone stem-cell transplantation. CTL019 cells proliferated in vivo and were detectable in the blood, bone marrow, and cerebrospinal fluid of patients who had a response. Sustained remission was achieved with a 6-month event-free survival rate of 67% (95% confidence interval [CI], 51 to 88) and an overall survival rate of 78% (95% CI, 65 to 95). At 6 months, the probability that a patient would have persistence of CTL019 was 68% (95% CI, 50 to 92) and the probability that a patient would have relapse-free B-cell aplasia was 73% (95% CI, 57 to 94). All the patients had the cytokine-release syndrome. Severe cytokine-release syndrome, which developed in 27% of the patients, was associated with a higher disease burden before infusion and was effectively treated with the anti-interleukin-6 receptor antibody tocilizumab.\n\n\nCONCLUSIONS\nChimeric antigen receptor-modified T-cell therapy against CD19 was effective in treating relapsed and refractory ALL. CTL019 was associated with a high remission rate, even among patients for whom stem-cell transplantation had failed, and durable remissions up to 24 months were observed. (Funded by Novartis and others; CART19 ClinicalTrials.gov numbers, NCT01626495 and NCT01029366.).",
"title": ""
},
{
"docid": "4b0b7dfa79556970e900a129d06e3b0c",
"text": "We present the science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems, targeting an evolution in technology, that might lead to impacts and benefits reaching into most areas of society. This roadmap was developed within the framework of the European Graphene Flagship and outlines the main targets and research areas as best understood at the start of this ambitious project. We provide an overview of the key aspects of graphene and related materials (GRMs), ranging from fundamental research challenges to a variety of applications in a large number of sectors, highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries. We also define an extensive list of acronyms in an effort to standardize the nomenclature in this emerging field.",
"title": ""
},
{
"docid": "9a7e6d0b253de434e62eb6998ff05f47",
"text": "Since 1984, a person-century of effort has gone into building CYC, a universal schema of roughly 105 general concepts spanning human reality. Most of the time has been spent codifying knowledge about these concepts; approximately 106 commonsense axioms have been handcrafted for and entered into CYC's knowledge base, and millions more have been inferred and cached by CYC. This article examines the fundamental assumptions of doing such a large-scale project, reviews the technical lessons learned by the developers, and surveys the range of applications that are or soon will be enabled by the technology.",
"title": ""
}
] | scidocsrr |